Getting democracy wrong
How lessons from biotechnology can illuminate limits of the Asilomar AI principles
DOI: https://doi.org/10.33621/jdsr.v6i440477
Keywords: principles, Asilomar, governance, artificial intelligence, biotechnology, longtermism
Abstract
Recent developments in large language models, and in computer-automated systems more generally (colloquially called ‘artificial intelligence’), have given rise to concerns about the potential social risks of AI. Of the numerous industry-driven principles put forth over the past decade to address these concerns, the Future of Life Institute’s Asilomar AI Principles are particularly noteworthy given their large number of wealthy and powerful signatories. This paper highlights the need for critical examination of the Asilomar AI Principles. The Asilomar model, first developed for biotechnology, is frequently cited as a successful policy approach for promoting expert consensus and containing public controversy. Situating the Asilomar AI Principles in the context of a broader history of Asilomar approaches illuminates the limitations of scientific and industry self-regulation. The Asilomar AI process shapes AI’s publicity in three interconnected ways: as an agenda-setting manoeuvre to promote longtermist beliefs; as an approach to policy making that restricts public engagement; and as a mechanism to enhance industry control of AI governance.
References
ASOC (2010) The Asilomar Conference Recommendations on Principles for Research into Climate Engineering Techniques. Climate Institute. Washington, D.C. Available at: https://climateviewer.com/downloads/Asilomar-Conference-Report-2010.pdf (accessed December 2023).
Altman S, Brockman G and Sutskever I (2023) Governance of Superintelligence. In: OpenAI. Available at: https://openai.com/blog/governance-of-superintelligence (accessed December 1, 2023).
Attard-Frost B, De los Ríos A and Walters DR (2023) The ethics of AI business practices: a review of 47 AI ethics guidelines. AI and Ethics 3(2): 389-406. https://doi.org/10.1007/s43681-022-00156-6
Baltimore D, Berg P, Botchan M, et al. (2015) A prudent path forward for genomic engineering and germline gene modification. Science 348(6230): 36-38. https://doi.org/10.1126/science.aab1028
Barinaga M (2000) Asilomar Revisited: Lessons for Today? Science 287(5458): 1584-1585. https://doi.org/10.1126/science.287.5458.1584
Barney D (2008) Politics and emerging media: The revenge of publicity. Global Media Journal -- Canadian Edition 1(1): 89-106.
Bender EM, Gebru T, McMillan-Major A, et al. (2021) On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, Canada, pp.610-623. Association for Computing Machinery.
Berg P (2004) Asilomar and recombinant DNA. Available at: https://www.nobelprize.org/prizes/chemistry/1980/berg/article/. (accessed December 1, 2023)
Berg P (2006) Brilliant science, dark politics, uncertain law. Jurimetrics 46(4): 379-389. http://www.jstor.org/stable/29762947
Berg P (2008) Asilomar 1975: DNA modification secured. Nature 455(7211): 290-291. https://doi.org/10.1038/455290a
Berg P, Baltimore D, Boyer HW, et al. (1974) Potential biohazards of recombinant DNA molecules. Science 185: 303.
Berg P, Baltimore D, Brenner S, et al. (1975) Summary statement of the Asilomar Conference on recombinant DNA molecules. Proc. Nat. Acad. Sci 72: 1981-1984.
Bostrom N (2014) Superintelligence: Paths, dangers, strategies. Oxford, UK: Oxford University Press.
Buolamwini J (2023) Unmasking AI: My mission to protect what is human in a world of machines. New York: Random House.
Capron A (2015) The lessons of Asilomar for today’s science. New York Times, May 28.
Capron A and Shapiro R (2001) Remember Asilomar? Re-examining science’s ethical and social responsibilities. Perspectives in Biology and Medicine 44(2): 162-169. https://doi.org/10.1353/pbm.2001.0022
Center for AI Safety (2023) AI Extinction Statement Press Release. Available at: www.safe.ai/press-release (accessed December 1, 2023).
Crary A (2023) The toxic ideology of longtermism. Radical Philosophy 214: 49-57.
Crawford K (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.
Dyer-Witheford N, Mikkola Kjosen A and Steinhoff J (2019) Inhuman Power: Artificial Intelligence and the Future of Capitalism. New York: Pluto Press.
Editorial (2015) After Asilomar. Nature 526(7573): 293-294. https://doi.org/10.1038/526293b
Eubanks V (2017) Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.
Ferber D (2004) Time for a Synthetic Biology Asilomar? Science 303(5655): 159-159. https://doi.org/10.1126/science.303.5655.159
Ferri G and Gloerich I (2023) Risk and harm: Unpacking ideologies in the AI discourse. In: Proceedings of the 5th International Conference on Conversational User Interfaces, pp.1-6.
Fjeld J, Achten N, Hilligoss H, et al. (2020) Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Center. January 15.
Future of Life Institute (FLI) (n.d.) Our mission. Available at: https://futureoflife.org/our-mission/ (accessed December 1, 2023).
Future of Life Institute (FLI) (2017) AI Principles. Available at: https://futureoflife.org/open-letter/ai-principles/ (accessed December 1, 2023).
Future of Life Institute (FLI) (2023) Pause giant AI experiments: An open letter. Available at: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed December 1, 2023).
Future of Life Institute (FLI) (n.d.) Beneficial AI 2017. Available at: https://futureoflife.org/event/bai-2017/ (accessed December 1, 2023).
Galaz V (2015) A manifesto for algorithms in the environment. Guardian, October 5. Available at: https://www.theguardian.com/science/political-science/2015/oct/05/a-manifesto-for-algorithms-in-the-environment. (accessed December 1, 2023)
Gebru T, Bender E, McMillan-Major A, et al. (2023) Statement from the listed authors of Stochastic Parrots on the “AI Pause” letter. DAIR. Available at: https://www.dair-institute.org/blog/letter-statement-March2023/ (accessed December 1, 2023).
Gebru T and Torres É (2024) The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday 29(4).
Gottweis H (2005) Transnationalizing recombinant-DNA regulation: Between Asilomar, EMBO, the OECD, and the European Community. Science as Culture 14: 325 - 338. https://doi.org/10.1080/09505430500369020
Grace K (2015) The Asilomar conference: A case study in risk mitigation. Technical report 2015–9. Machine Intelligence Research Institute. Berkeley, CA. Available at: https://intelligence.org/files/TheAsilomarConference.pdf
Gregorowius D, Biller-Andorno N and Deplazes-Zemp A (2017) The role of scientific self-regulation for the control of genome editing in the human germline: The lessons from the Asilomar and the Napa meetings show how self-regulation and public deliberation can lead to regulation of new biotechnologies. EMBO Rep 18(3): 355-358. https://doi.org/10.15252/embr.201643054
Heikkila M (2023) What’s changed since the ‘pause AI’ letter six months ago? MIT Technology Review, September 26. Available at: https://www.technologyreview.com/2023/09/26/1080299/six-months-on-from-the-pause-letter/ (accessed August 25, 2024).
Hogan M (2015) Data flows and water woes: The Utah Data Center. Big Data & Society (July–December): 1–12 https://doi.org/10.1177/2053951715592429
Hurlbut JB (2015a) Limits of Responsibility: Genome Editing, Asilomar, and the Politics of Deliberation. Hastings Center Report 45(5): 11-14.
Hurlbut JB (2015b) Remembering the future: Science, law, and the legacy of Asilomar. In: Jasanoff S and Kim S-H (eds) Dreamscapes of Modernity: Sociotechnical imaginaries and the fabrication of power. Chicago, IL: University of Chicago Press, pp.126-151.
Jackson DA, Symons RH and Berg P (1972) Biochemical method for inserting new genetic information into DNA of Simian Virus 40: circular SV40 DNA molecules containing lambda phage genes and the galactose operon of Escherichia coli. Proc Natl Acad Sci U S A 69(10): 2904-2909.
Jasanoff S and Hurlbut JB (2018) A global observatory for gene editing. Nature 555: 435-437.
Jasanoff S, Hurlbut JB and Saha K (2019) Democratic Governance of Human Germline Genome Editing. The CRISPR Journal 2(5): 266-271. https://doi.org/10.1089/crispr.2019.0047
Jobin A, Ienca M and Vayena E (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389-399. https://doi.org/10.1038/s42256-019-0088-2
Krimsky S (1982) Genetic Alchemy: The Social History of the Recombinant DNA Controversy. Cambridge, MA: MIT Press.
Labaree A (2014) Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world. Salon, May 10. Available at: https://www.cser.ac.uk/news/our-science-fiction-apocalypse-meet-scientists-try/ (accessed August 25, 2024).
Lewis-Kraus, G. (2016) The great A.I. awakening. The New York Times Magazine. Dec. 14. Available at: https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html (accessed August 25, 2024)
MacAskill W (2022) What we owe the future. New York: Basic Books.
Marchant G, Tournas L and Gutierrez CI (2020) Governing new technologies through soft law: Lessons for artificial intelligence. Jurimetrics 61(1): 1-18.
McKelvey F and Roberge J (2023) Recursive Power: AI Governmentality and Technofutures. In: Lindgren S (ed) Handbook of Critical Studies of Artificial Intelligence. London: Elgar, pp.21-32.
Milmo D (2023) Google, Microsoft, OpenAI and startup form body to regulate AI development. The Guardian, July 26. Available at: https://www.theguardian.com/technology/2023/jul/26/google-microsoft-openai-anthropic-ai-frontier-model-forum (accessed August 25, 2024).
Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1: 501-507.
Munn L (2023) The uselessness of AI ethics. AI and Ethics 3(3): 869-877. https://doi.org/10.1007/s43681-022-00209-w
Naughton J (2022) Longtermism: How good intentions and the rich created a dangerous creed. The Guardian, December 4. Available at: https://www.theguardian.com/technology/commentisfree/2022/dec/04/longtermism-rich-effective-altruism-tech-dangerous (accessed August 25, 2024).
Noble S (2018) Algorithms of oppression: How search engines reinforce racism. New York: NYU Press.
Parthasarathy S (2015) Governance Lessons for CRISPR/Cas9 from the Missed Opportunities of Asilomar. Ethics in Biology, Engineering and Medicine: An International Journal 6: 305-312.
Prabhakaran V, Mitchell M, Gebru T and Gabriel I (2022) A human rights-based approach to responsible AI. arXiv: 2210.02667. https://doi.org/10.48550/arXiv.2210.02667
Richards B, Agüera y Arcas B, et al. (2023) The Illusion of AI’s Existential Risk. Noema, July 18. Available at: https://www.noemamag.com/the-illusion-of-ais-existential-risk/ (accessed August 25, 2024).
Rogers M (1975) The Pandora’s Box Congress. Rolling Stone, June 19. Available at: https://web.mit.edu/endy/www/readings/RollingStone(189)37.pdf (accessed December 1, 2023).
Rufo F and Ficorilli A (2019) From Asilomar to Genome Editing: Research Ethics and Models of Decision. NanoEthics 13(3): 223-232.
Schiffer Z (2021) Google fires second ethics researcher following internal investigation. The Verge, February 19. Available at: https://www.theverge.com/2021/2/19/22292011/google-second-ethical-ai-researcher-fired. (accessed December 1, 2023)
Singer M and Soll D (1973) Guidelines for hybrid DNA molecules. Science 181: 1114.
Stapf-Fine H, Bartosch U, Bauberger S, et al. (2018) Policy Paper on the Asilomar Principles on Artificial Intelligence. June. Berlin: Federation of German Scientists. Available at: https://vdw-ev.de/wp-content/uploads/2019/05/Policy-Paper-on-the-Asilomar-principles-on-Artificial-Inteligence_end.pdf (accessed December 1, 2023)
Sterling B (2018) The Asilomar AI Principles. Wired, August 13. Available at: https://www.wired.com/beyond-the-beyond/2018/06/asilomar-ai-principles/ (accessed December 1, 2023).
Taylor C and Dewsbury B (2019) Barriers to inclusive deliberation and democratic governance of genetic technologies at the science-policy interface. Journal of Science Communication 18(3): Y02. https://doi.org/10.22323/2.18030402
Torres É (2021) Against Longtermism. Aeon. Available at: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo. (accessed December 1, 2023)
Wagner B (2018) Ethics as an escape from regulation: From “ethics-washing” to ethics-shopping? In: Bayamlioğlu E, Baraliuc I, Janssens L, et al. (eds) Being Profiled. Amsterdam: Amsterdam University Press, pp.84-89.
Williams A, Miceli M, and Gebru T. (2022) The exploited labor behind artificial intelligence. Noema. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/
Zuboff S (2019) The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: PublicAffairs.
License
Copyright (c) 2024 Gwendolyn Blue, Mél Hogan
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.