AI policymaking as drama

Stages, roles, and ghosts in AI governance in the United Kingdom and Canada

Authors

  • Alison Powell
  • Fenwick McKelvey, Concordia University, Canada

DOI:

https://doi.org/10.33621/jdsr.v6i4.40468

Keywords:

policy, drama, AI governance, Canada, United Kingdom, critical policy studies, hauntology

Abstract

As two researchers faced with the prospect of still more knowledge mobilisation and still more consultation, our manuscript reflects on strategies for engaging with consultations as critical questions in critical AI studies. Our intervention examines the often-ambivalent roles of researchers and ‘experts’ in the production, contestation, and transformation of consultations and the publicities concerning AI that circulate through them. Although ‘AI’ is increasingly a marketing term, there are still substantive strategic efforts toward developing AI industries. These policy consultations do open opportunities for experts like the authors to contribute to public discourse and policy practice on AI. Nevertheless, in the process of negotiating and developing around these initiatives, a range of dominant publicities emerges, including inevitability and hype. We draw on our experiences contributing to AI policy-making processes in two Global North countries. Resurfacing long-standing critical questions about participation in policymaking, our manuscript reflects on the possibilities of critical scholarship faced with uncertainty in the rhetoric of democracy and public engagement.

Published

2024-12-31

Section

Research Articles
