Autocompleting inequality

Large language models and the “alignment problem”

Authors

  • Mike Zajko

DOI:

https://doi.org/10.33621/jdsr.v7i1.54879

Keywords:

generative AI, alignment, inequality, language

Abstract

The latest wave of AI hype has been driven by ‘generative AI’ systems exemplified by ChatGPT, which OpenAI created by ‘fine-tuning’ a large language model (LLM). Fine-tuning uses human labor to provide feedback on generated outputs in order to bring these outputs into greater ‘alignment’ with ‘safety’. This article analyzes the fine-tuning of generative AI as a process of social ordering, beginning with the encoding of cultural dispositions into LLMs, their containment and redirection into vectors of ‘safety’, and users’ subsequent challenges to these ‘guard rails’. Fine-tuning becomes a means by which some social hierarchies are reproduced, reshaped, and flattened. By analyzing documentation provided by generative AI developers, I show how fine-tuning makes use of human judgement to reshape the algorithmic reproduction of inequality, while also arguing that the most important values driving AI alignment are commercial imperatives that align these systems with political economy.

References

Abid, Abubakar, Maheen Farooqi, and James Zou. 2021. “Large Language Models Associate Muslims with Violence.” Nature Machine Intelligence 3(6):461–63. https://doi.org/10.1038/s42256-021-00359-2

Aguirre, A., G. Dempsey, H. Surden, and P. B. Reiner. 2020. “AI Loyalty: A New Paradigm for Aligning Stakeholder Interests.” IEEE Transactions on Technology and Society 1(3):128–37. https://doi.org/10.1109/TTS.2020.3013490

Airoldi, Massimo. 2022. Machine Habitus: Toward a Sociology of Algorithms. Polity Press.

Alba, Davey. 2023. “Google’s AI Chatbot Is Trained by Humans Who Say They’re Overworked, Underpaid and Frustrated.” Bloomberg. Retrieved July 12, 2023 (https://web.archive.org/web/20230712123122/https://www.bloomberg.com/news/articles/2023-07-12/google-s-ai-chatbot-is-trained-by-humans-who-say-they-re-overworked-underpaid-and-frustrated).

Aleem, Zeeshan. 2023. “No, ChatGPT Isn’t Willing to Destroy Humanity out of ‘Wokeness.’” MSNBC.com. Retrieved January 15, 2024 (https://www.msnbc.com/opinion/msnbc-opinion/chatgpt-slur-conservatives-woke-elon-rcna69724).

Altman, Sam. 2021. “Moore’s Law for Everything.” Retrieved September 9, 2023 (https://moores.samaltman.com/).

Anthropic. 2022. “hh-rlhf.” GitHub. Retrieved July 12, 2023 (https://github.com/anthropics/hh-rlhf).

Armano, David. 2023. “LLM Inc.: Every Business Will Have Their Own Large Language Model.” Forbes. Retrieved October 20, 2023 (https://www.forbes.com/sites/davidarmano/2023/09/20/llm-inc-every-business-will-have-have-their-own-large-language-model/).

Askell, Amanda, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. 2021. “A General Language Assistant as a Laboratory for Alignment.” Retrieved October 27, 2023 (http://arxiv.org/abs/2112.00861).

Auerbach, David. 2013. “Filling the Void.” Slate, November 19. Retrieved October 27, 2023 (https://slate.com/technology/2013/11/google-autocomplete-the-results-arent-always-what-you-think-they-are.html).

Bai, Yuntao, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. “Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback.” Retrieved October 27, 2023 (http://arxiv.org/abs/2204.05862).

Baum, Jeremy, and John Villasenor. 2024. “Rendering Misrepresentation: Diversity Failures in AI Image Generation.” Brookings Institution. April 17. Retrieved September 1, 2024 (https://www.brookings.edu/articles/rendering-misrepresentation-diversity-failures-in-ai-image-generation/).

Belanger, Ashley. 2023. “ChatGPT Users Drop for the First Time as People Turn to Uncensored Chatbots.” Ars Technica. Retrieved July 7, 2023 (https://arstechnica.com/tech-policy/2023/07/chatgpts-user-base-shrank-after-openai-censored-harmful-responses/).

Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, U.K.: Polity Press.

Bianchi, Federico, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan. 2023. “Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale.” 2023 ACM Conference on Fairness, Accountability, and Transparency: 1493–1504. https://doi.org/10.1145/3593013.3594095.

Bourdieu, Pierre. 1991. Language and Symbolic Power. Harvard University Press.

Charrington, Sam. 2023. “Ensuring LLM Safety for Production Applications with Shreya Rajpal.” The TWIML AI Podcast. Retrieved October 27, 2023 (https://twimlai.com/podcast/twimlai/ensuring-llm-safety-for-production-applications/).

Chiang, Ted. 2017. “Silicon Valley Is Turning Into Its Own Worst Fear.” BuzzFeed News. Retrieved July 14, 2020 (https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway).

Christian, Brian. 2020. The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company.

Costanza-Chock, Sasha. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge, MA: MIT Press.

Davis, Jenny L., Apryl Williams, and Michael W. Yang. 2021. “Algorithmic Reparation.” Big Data & Society 8(2): 1–12. https://doi.org/10.1177/20539517211044808

Deshpande, Ameet, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. 2023. “Toxicity in ChatGPT: Analyzing Persona-Assigned Language Models.” Retrieved October 27, 2023 (http://arxiv.org/abs/2304.05335).

Dotan, Tom, and Deepa Seetharaman. 2023. “Big Tech Struggles to Turn AI Hype Into Profits; Microsoft, Google and Others Experiment with How to Produce, Market and Charge for New Tools.” Wall Street Journal. Retrieved October 13, 2023 (https://www.wsj.com/tech/ai/ais-costly-buildup-could-make-early-products-a-hard-sell-bdd29b9f).

Edwards, Benj. 2024. “Google’s Hidden AI Diversity Prompts Lead to Outcry over Historically Inaccurate Images.” Ars Technica. Retrieved August 23, 2024 (https://arstechnica.com/information-technology/2024/02/googles-hidden-ai-diversity-prompts-lead-to-outcry-over-historically-inaccurate-images/).

Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, N.Y.: St. Martin’s Press.

Ezquer, Evan. 2023. “JailBreaking ChatGPT: How to Activate DAN & Other Alter Egos.” Metaroids. Retrieved August 2, 2023 (https://metaroids.com/learn/jailbreaking-chatgpt-everything-you-need-to-know/).

Fourcade, Marion, and Fleur Johns. 2020. “Loops, Ladders and Links: The Recursivity of Social and Machine Learning.” Theory and Society 49(5): 803–32. https://doi.org/10.1007/s11186-020-09409-x

Fournier-Tombs, Eleonore. 2023. Gender Reboot: Reprogramming Gender Rights in the Age of AI. Palgrave Macmillan.

Fraser, Colin. 2023a. “ChatGPT: Automatic Expensive BS at Scale.” Medium. Retrieved July 19, 2023 (https://medium.com/@colin.fraser/chatgpt-automatic-expensive-bs-at-scale-a113692b13d5).

Fraser, Colin. 2023b. “Who Are We Talking to When We Talk to These Bots?” Medium. Retrieved September 1, 2024 (https://medium.com/@colin.fraser/who-are-we-talking-to-when-we-talk-to-these-bots-9a7e673f8525).

Gabriel, Iason. 2020. “Artificial Intelligence, Values, and Alignment.” Minds and Machines 30(3): 411–37. https://doi.org/10.1007/s11023-020-09539-2

Gallegos, Isabel O., Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, and Nesreen K. Ahmed. 2024. “Bias and Fairness in Large Language Models: A Survey.” Computational Linguistics 50(3). https://doi.org/10.1162/coli_a_00524

Ganguli, Deep, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, Andy Jones, Sam Bowman, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El-Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Danny Hernandez, Tristan Hume, Josh Jacobson, Scott Johnston, Shauna Kravec, Catherine Olsson, Sam Ringer, Eli Tran-Johnson, Dario Amodei, Tom Brown, Nicholas Joseph, Sam McCandlish, Chris Olah, Jared Kaplan, and Jack Clark. 2022. “Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned.” Retrieved October 27, 2023 (http://arxiv.org/abs/2209.07858).

Ghosh, Sourojit, and Aylin Caliskan. 2023. “ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five Other Low-Resource Languages.” Proceedings of AAAI/ACM Conference on AI, Ethics, and Society (AIES ’23). https://doi.org/10.48550/arXiv.2305.10510

Gibbs, Samuel. 2016. “Google Alters Search Autocomplete to Remove ‘are Jews Evil’ Suggestion.” The Guardian, December 5. Retrieved September 1, 2024 (https://www.theguardian.com/technology/2016/dec/05/google-alters-search-autocomplete-remove-are-jews-evil-suggestion).

Gillespie, Tarleton. 2024. “Generative AI and the Politics of Visibility.” Big Data & Society 11(2). https://doi.org/10.1177/20539517241252131

Giovanola, Benedetta, and Simona Tiribelli. 2022. “Weapons of Moral Construction? On the Value of Fairness in Algorithmic Decision-Making.” Ethics and Information Technology 24(1): 3. https://doi.org/10.1007/s10676-022-09622-5

Green, Ben, and Lily Hu. 2018. “The Myth in the Methodology: Towards a Recontextualization of Fairness in Machine Learning.” Retrieved October 27, 2023 (https://scholar.harvard.edu/files/bgreen/files/18-icmldebates.pdf).

Gross, Nicole. 2023. “What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI.” Social Sciences 12(8): 435. https://doi.org/10.3390/socsci12080435

Hagendorff, Thilo, and Sarah Fabi. 2022. “Methodological Reflections for AI Alignment Research Using Human Feedback.” Retrieved October 27, 2023 (http://arxiv.org/abs/2301.06859).

Hao, Karen. 2023. “The Hidden Workforce That Helped Filter Violence and Abuse Out of ChatGPT.” Wall Street Journal. Retrieved July 12, 2023 (https://www.wsj.com/podcasts/the-journal/the-hidden-workforce-that-helped-filter-violence-and-abuse-out-of-chatgpt/ffc2427f-bdd8-47b7-9a4b-27e7267cf413).

Heaven, Will Douglas. 2020. “How to Make a Chatbot That Isn’t Racist or Sexist.” MIT Technology Review. Retrieved October 27, 2023 (https://www.technologyreview.com/2020/10/23/1011116/chatbot-gpt3-openai-facebook-google-safety-fix-racist-sexist-language-ai/).

Hirschauer, Stefan. 2023. “Telling People Apart: Outline of a Theory of Human Differentiation.” Sociological Theory 41(4): 352–76. https://doi.org/10.1177/07352751231206411

Hoffman, Steve G. 2021. “A Story of Nimble Knowledge Production in an Era of Academic Capitalism.” Theory and Society 50(4): 541–75. https://doi.org/10.1007/s11186-020-09422-0

Hofmann, Valentin, Pratyusha Ria Kalluri, Dan Jurafsky, and Sharese King. 2024. “AI Generates Covertly Racist Decisions about People Based on Their Dialect.” Nature. https://doi.org/10.1038/s41586-024-07856-5

Huang, Haomiao. 2023. “How ChatGPT Turned Generative AI into an ‘Anything Tool.’” Ars Technica. Retrieved August 24, 2023 (https://arstechnica.com/ai/2023/08/how-chatgpt-turned-generative-ai-into-an-anything-tool/).

Jacobi, Tonja, and Matthew Sag. 2024. “We Are the AI Problem.” Emory Law Journal 74.

Johnson, Rebecca L., Giada Pistilli, Natalia Menéndez-González, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene, and Donald Jay Bertulfo. 2022. “The Ghost in the Machine Has an American Accent: Value Conflict in GPT-3.” Retrieved October 27, 2023 (http://arxiv.org/abs/2203.07785).

Joyce, Kelly, Laurel Smith-Doerr, Sharla Alegria, Susan Bell, Taylor Cruz, Steve G. Hoffman, Safiya Umoja Noble, and Benjamin Shestakofsky. 2021. “Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change.” Socius 7: 1–11. https://doi.org/10.1177/2378023121999581

Levy, Steven. 2023. “What OpenAI Really Wants.” WIRED. Retrieved October 20, 2023 (https://www.wired.com/story/what-openai-really-wants/).

Luccioni, Sasha, Giada Pistilli, Nazneen Rajani, Elizabeth Allendorf, Irene Solaiman, Nathan Lambert, and Margaret Mitchell. 2023. “Ethics and Society Newsletter #4: Bias in Text-to-Image Models.” Hugging Face. Retrieved July 11, 2023 (https://huggingface.co/blog/ethics-soc-4).

Markov, Todor, Chong Zhang, Sandhini Agarwal, Tyna Eloundou, Teddy Lee, Steven Adler, Angela Jiang, and Lilian Weng. 2023. “A Holistic Approach to Undesired Content Detection in the Real World.” Retrieved October 27, 2023 (http://arxiv.org/abs/2208.03274).

McCurry, Justin. 2021. “South Korean AI Chatbot Pulled from Facebook after Hate Speech towards Minorities.” The Guardian, January 14. Retrieved October 27, 2023 (https://www.theguardian.com/world/2021/jan/14/time-to-properly-socialise-hate-speech-ai-chatbot-pulled-from-facebook).

Mei, Katelyn, Sonia Fereidooni, and Aylin Caliskan. 2023. “Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks.” 2023 ACM Conference on Fairness, Accountability, and Transparency: 1699–1710. https://doi.org/10.1145/3593013.3594109

Metcalf, Jacob, Emanuel Moss, and danah boyd. 2019. “Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics.” Social Research: An International Quarterly 86(2): 449–76. https://doi.org/10.1353/sor.2019.0022

Miceli, Milagros, and Julian Posada. 2022. “The Data-Production Dispositif.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW2, Article 460): 1–37. https://doi.org/10.1145/3555561

Miceli, Milagros, Julian Posada, and Tianling Yang. 2022. “Studying Up Machine Learning Data: Why Talk About Bias When We Mean Power?” Proceedings of the ACM on Human-Computer Interaction 6 (GROUP, Article 34): 1–14. https://doi.org/10.1145/3492853

Miceli, Milagros, Martin Schuessler, and Tianling Yang. 2020. “Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision.” Proceedings of the ACM on Human-Computer Interaction 4 (CSCW2, Article 115): 1–25. https://doi.org/10.1145/3415186

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.

O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, N.Y.: Crown.

OpenAI. 2022. “[PUBLIC] InstructGPT: Final Labeling Instructions.” Google Docs. Retrieved August 30, 2023 (https://docs.google.com/document/d/1MJCqDNjzD04UbcnVZ-LmeXJ04-TKEICDAepXyMCBUb8/edit?usp=embed_facebook).

OpenAI. 2023a. “DALL·E 3 System Card.” Retrieved August 11, 2023 (https://cdn.openai.com/papers/DALL_E_3_System_Card.pdf).

OpenAI. 2023b. “GPT-4 System Card.” Retrieved August 11, 2023 (https://cdn.openai.com/papers/gpt-4-system-card.pdf).

OpenAI. 2024. “Model Spec.” Retrieved September 2, 2024 (https://cdn.openai.com/spec/model-spec-2024-05-08.html).

Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. “Training Language Models to Follow Instructions with Human Feedback.” Retrieved October 27, 2023 (https://arxiv.org/abs/2203.02155).

Ovalle, Anaelia, Palash Goyal, Jwala Dhamala, Zachary Jaggers, Kai-Wei Chang, Aram Galstyan, Richard Zemel, and Rahul Gupta. 2023. “‘I’m Fully Who I Am’: Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation.” 2023 ACM Conference on Fairness, Accountability, and Transparency: 1246–66. https://doi.org/10.48550/arXiv.2305.09941

Penn, Jonnie. 2018. “AI Thinks like a Corporation—and That’s Worrying.” The Economist, November 26. Retrieved October 27, 2023 (https://www.economist.com/open-future/2018/11/26/ai-thinks-like-a-corporation-and-thats-worrying).

Qi, Xiangyu, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. 2023. “Fine-Tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!” Retrieved September 2, 2024 (https://doi.org/10.48550/arXiv.2310.03693).

r/ChatGPTJailbreak. n.d. Reddit. Retrieved January 9, 2024 (https://www.reddit.com/r/ChatGPTJailbreak/).

Rao, Abhinav, Sachin Vashistha, Atharva Naik, Somak Aditya, and Monojit Choudhury. 2023. “Tricking LLMs into Disobedience: Understanding, Analyzing, and Preventing Jailbreaks.” Retrieved October 27, 2023 (http://arxiv.org/abs/2305.14965).

Rogers, Reece. 2024. “Here’s How Generative AI Depicts Queer People.” Wired, April 2. Retrieved August 29, 2024 (https://www.wired.com/story/artificial-intelligence-lgbtq-representation-openai-sora/).

Rosanvallon, Pierre. 2013. The Society of Equals. Translated by Arthur Goldhammer. Cambridge, MA: Harvard University Press.

Roth, Emma. 2024. “Google Gemini Will Let You Create AI-Generated People Again.” The Verge. August 28. Retrieved August 28, 2024 (https://www.theverge.com/2024/8/28/24230445/google-gemini-create-ai-generated-people-imagen-3).

Sabbaghi, Shiva Omrani, Robert Wolfe, and Aylin Caliskan. 2023. “Evaluating Biased Attitude Associations of Language Models in an Intersectional Context.” Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society: 542–53. https://doi.org/10.1145/3600211.3604666

Sadowski, Jathan, and Mark Andrejevic. 2020. “More than a Few Bad Apps.” Nature Machine Intelligence 1–3. https://doi.org/10.1038/s42256-020-00246-2

Fujimoto, Sasuke, and Kazuhiro Takemoto. 2023. “Revisiting the Political Biases of ChatGPT.” Frontiers in Artificial Intelligence 6. https://doi.org/10.3389/frai.2023.1232003

Schwartz, Oscar. 2019. “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation.” IEEE Spectrum. Retrieved August 11, 2023 (https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation).

Scott, Mark, Gian Volpicelli, Mohar Chatterjee, Vincent Manancourt, Clothilde Goujard, and Brendan Bordelon. 2024. “Inside the Shadowy Global Battle to Tame the World’s Most Dangerous Technology.” POLITICO. March 26. Retrieved August 30, 2024 (https://www.politico.eu/article/ai-control-kamala-harris-nick-clegg-meta-big-tech-social-media/).

Shelby, Renee, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, Paul Nicholas, N’Mah Yilla-Akbari, et al. 2023. “Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction.” Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society: 723–41. https://doi.org/10.1145/3600211.3604673

Simonite, Tom. 2021. “What Really Happened When Google Ousted Timnit Gebru.” WIRED, June 8. Retrieved October 13, 2023 (https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened).

Smith, Dorothy E. 2001. “Texts and the Ontology of Organizations and Institutions.” Studies in Cultures, Organizations & Societies 7(2): 159–98. https://doi.org/10.1080/10245280108523557

Smith, Dorothy E., and Susan Marie Turner. 2014. “Introduction.” Pp. 3–14 in Incorporating Texts into Institutional Ethnographies, edited by D. E. Smith and S. M. Turner. University of Toronto Press.

Snow, Olivia. 2022. “‘Magic Avatar’ App Lensa Generated Nudes From My Childhood Photos.” WIRED, December 7. Retrieved October 13, 2023 (https://www.wired.com/story/lensa-artificial-intelligence-csem/).

Steinhoff, James. 2021. “Industrializing Intelligence: A Political Economic History of the AI Industry.” Pp. 99–131 in Automation and Autonomy: Labour, Capital and Machines in the Artificial Intelligence Industry. Marx, Engels, and Marxisms. Cham: Springer International Publishing.

Steinhoff, James. 2023. “AI Ethics as Subordinated Innovation Network.” AI & SOCIETY. https://doi.org/10.1007/s00146-023-01658-5.

steven t. piantadosi [@spiantado]. 2022. “Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked. And what is lurking inside is egregious. @Abebab @sama tw racism, sexism. https://t.co/V4fw1fY9dY.” Twitter. Retrieved August 7, 2023 (https://twitter.com/spiantado/status/1599462375887114240).

Stokel-Walker, Chris. 2023. “What Grok’s Recent OpenAI Snafu Teaches Us about LLM Model Collapse.” Fast Company. Retrieved December 14, 2023 (https://www.fastcompany.com/90998360/grok-openai-model-collapse).

Tan, Rebecca, and Regine Cabato. 2023. “Behind the AI Boom, an Army of Overseas Workers in ‘Digital Sweatshops.’” Washington Post. Retrieved October 22, 2023 (https://www.washingtonpost.com/world/2023/08/28/scale-ai-remotasks-philippines-artificial-intelligence/).

Tegmark, Max. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf.

Tiku, Nitasha, and Will Oremus. 2023. “The Right’s New Culture-War Target: ‘Woke AI.’” Washington Post, March 1. Retrieved September 1, 2024 (https://www.washingtonpost.com/technology/2023/02/24/woke-ai-chatgpt-culture-war/).

Touvron, Hugo, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. “Llama 2: Open Foundation and Fine-Tuned Chat Models.” Retrieved October 27, 2023 (http://arxiv.org/abs/2307.09288).

Verdegem, Pieter. 2022. “Dismantling AI Capitalism: The Commons as an Alternative to the Power Concentration of Big Tech.” AI & SOCIETY. https://doi.org/10.1007/s00146-022-01437-8.

Vincent, James. 2023. “Google Invested $300 Million in AI Firm Founded by Former OpenAI Researchers.” The Verge. Retrieved July 12, 2023 (https://www.theverge.com/2023/2/3/23584540/google-anthropic-investment-300-million-openai-chatgpt-rival-claude).

Widder, David Gray, Sarah West, and Meredith Whittaker. 2023. “Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI.” Retrieved October 27, 2023 (https://papers.ssrn.com/abstract=4543807).

Xu, Jing, Da Ju, Margaret Li, Y.-Lan Boureau, Jason Weston, and Emily Dinan. 2021. “Recipes for Safety in Open-Domain Chatbots.” Retrieved October 27, 2023 (http://arxiv.org/abs/2010.07079).

Published

2025-05-30

Section

Research Articles
