Strategic misrecognition and speculative rituals in generative AI

Authors

  • Sun-ha Hong Simon Fraser University

DOI:

https://doi.org/10.33621/jdsr.v6i4.40474

Keywords:

generative AI, machine intelligence, agency, ritual, spectacle, history of AI

Abstract

Public conversation around generative AI is saturated with the ‘realness question’: is the software really intelligent? At what point could we say it is thinking? I argue that attempts to define and measure those thresholds miss the fire for the smoke. The primary societal impact of the realness question comes not from the constantly deferred sentient machine of the future, but from its present form as rituals of misrecognition. Persistent confusion between plausible textual output and internal cognitive processes, and the use of mystifying language like ‘learning’ and ‘hallucination’, configure public expectations around what kinds of politics and ethics of genAI are reasonable or plausible. I adapt the notion of abductive agency, originally developed by the anthropologist Alfred Gell, to explain how such misrecognition strategically defines the terms of the AI conversation.

I further argue that such strategic misrecognition is neither new nor accidental, but a central tradition in the social history of computing and artificial intelligence. This tradition runs from the originary deception of the Turing Test, famously never intended as a rigorous test of artificial intelligence, to the present array of drama and public spectacle in the form of competitions, demonstrations and product launches. The primary impact of this tradition is not to progressively clarify the nature of machine intelligence, but to constantly redefine values like intelligence in order to legitimise and mythologise our newest machines – and their increasingly wealthy and powerful owners.

References

Abram, S. (2005). Introduction: Science/technology as politics by other means. Focaal—European Journal of Anthropology, 45, 3–20.

Andreessen, M. (2023, June 6). Why AI Will Save the World. Andreessen Horowitz. https://a16z.com/ai-will-save-the-world/

Beardon, C. (1994). Computers, postmodernism and the culture of the artificial. AI & Society, 8(1), 1–16. https://doi.org/10.1007/BF02065174

Bender, E. (2022). On NYT Magazine on AI: Resist the Urge to be Impressed. Medium. https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd

Bender, E. M. (2023, July 6). Talking about a ‘schism’ is ahistorical. Medium. https://medium.com/@emilymenonbender/talking-about-a-schism-is-ahistorical-3c454a77220f

Bender, E. M., & Fiesler, C. (2023, March 1). The Limitations of ChatGPT with Emily M. Bender and Casey Fiesler. The Radical AI Podcast. https://www.radicalai.org/chatgpt-limitations

Boltanski, L., & Chiapello, E. (2007). The New Spirit of Capitalism. Verso.

Bringsjord, S., Bello, P., & Ferrucci, D. (2003). Creativity, the Turing Test, and the (Better) Lovelace Test. In J. H. Moor (Ed.), The Turing Test: The Elusive Standard of Artificial Intelligence (pp. 215–239). Springer Netherlands. https://doi.org/10.1007/978-94-010-0105-2_12

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4 (arXiv:2303.12712). arXiv. https://doi.org/10.48550/arXiv.2303.12712

Chakravartty, P. (2004). Telecom, National Development and the Indian State: A Postcolonial Critique. Media, Culture & Society, 26(2), 227–249. https://doi.org/10.1177/016344370404117

Cheney-Lippold, J. (2017). We are Data—Algorithms and the making of our digital selves. New York University Press.

Cheney-Lippold, J. (2024). The silicon future. New Media & Society, OnlineFirst. https://doi.org/10.1177/14614448241234864

Chua, L., & Elliott, M. (Eds.). (2013). Distributed objects: Meaning and mattering after Alfred Gell. Berghahn.

Coldicutt, R. (2023, June 1). On Understanding Power and Technology. Medium. https://rachelcoldicutt.medium.com/on-understanding-power-and-technology-1345dc57a1a

Corsín Jiménez, A. (2014). Introduction: The prototype: More than many and less than one. Journal of Cultural Economy, 7(4), 381–398. https://doi.org/10.1080/17530350.2013.858059

Dick, S. (2019). Artificial Intelligence. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.92fe150c

Dinerstein, J. (2006). Technology and Its Discontents: On the Verge of the Posthuman. American Quarterly, 58(3), 569–595. http://dx.doi.org/10.1353/aq.2006.0056

Dzieza, J. (2023, June 20). Inside the AI Factory. The Verge. https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots

Erickson, P., Klein, J. L., Daston, L., Lemov, R., Sturm, T., & Gordin, M. D. (2013). How Reason Almost Lost Its Mind. University of Chicago Press.

Floridi, L., Taddeo, M., & Turilli, M. (2009). Turing’s Imitation Game: Still an Impossible Challenge for All Machines and Some Judges––An Evaluation of the 2008 Loebner Contest. Minds and Machines, 19(1), 145–150. https://doi.org/10.1007/s11023-008-9130-6

Foster, H. (1991). Exquisite Corpses. Visual Anthropology Review, 7(1), 51–61. https://doi.org/10.1525/var.1991.7.1.51

Gebru, T., Bender, E. M., McMillan-Major, A., & Mitchell, M. (2023, March 31). Statement from the listed authors of Stochastic Parrots on the “AI pause” letter. Distributed AI Research Institute. https://www.dair-institute.org/blog/letter-statement-March2023/

Gebru, T., & Torres, É. (2024). The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday, 29(4). https://doi.org/10.5210/fm.v29i4.13636

Gell, A. (1998). Art and Agency: An Anthropological Theory. Clarendon Press.

Geoghegan, B. D. (2020). Orientalism and Informatics: Alterity from the Chess-Playing Turk to Amazon’s Mechanical Turk. Ex-Position, 43, 45.

Goldenfein, J., Green, B., & Viljoen, S. (2020, April 17). Privacy Versus Health Is a False Trade-Off. Jacobin. https://jacobinmag.com/2020/4/privacy-health-surveillance-coronavirus-pandemic-technology

Halpern, O., & Mitchell, R. (2022). The smartness mandate. MIT Press.

Hayles, K. N. (2017). Unthought: The Power of the Cognitive Nonconscious. University of Chicago Press.

Heaven, W. D. (2023, August 30). AI hype is built on high test scores. Those tests are flawed. MIT Technology Review. https://www.technologyreview.com/2023/08/30/1078670/large-language-models-arent-people-lets-stop-testing-them-like-they-were/

Hodges, A. (2009). Alan Turing and the Turing Test. In R. Epstein, G. Roberts, & G. Beber (Eds.), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (pp. 13–22). Springer Netherlands. https://doi.org/10.1007/978-1-4020-6710-5_2

Hong, S. (2022). Predictions Without Futures. History and Theory, 61(3), 371–390. https://doi.org/10.1111/hith.12232

Irani, L. (2015). The cultural work of microwork. New Media & Society, 17(5), 720–739. https://doi.org/10.1177/1461444813511926

Irani, L. (2018). “Design Thinking”: Defending Silicon Valley at the Apex of Global Labor Hierarchies. Catalyst: Feminism, Theory, Technoscience, 4(1), Article 1. https://doi.org/10.28968/cftt.v4i1.29638

Jiang, H. H., Brown, L., Cheng, J., Khan, M., Gupta, A., Workman, D., Hanna, A., Flowers, J., & Gebru, T. (2023). AI Art and its Impact on Artists. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 363–374. http://dx.doi.org/10.1145/3600211.3604681

Johnson, K. (2022, June 14). LaMDA and the Sentient AI Trap. Wired. https://www.wired.com/story/lamda-sentient-ai-bias-google-blake-lemoine/

Kampmann, D. (2024). Venture capital, the fetish of artificial intelligence, and the contradictions of making intangible assets. Economy and Society, 0(0), 1–28. https://doi.org/10.1080/03085147.2023.2294602

Kang, M. (2011). Sublime Dreams of Living Machines: The Automaton in the European Imagination. Harvard University Press.

Katz, Y. (2020). Artificial whiteness: Politics and ideology in artificial intelligence. Columbia University Press.

Kittler, F. A. (1986). Gramophone, Film, Typewriter (G. Winthrop-Young & M. Wutz, Trans.). Stanford University Press.

Kneese, T. (2015, October). Spiritual Machines: Transhumanist Temporalities and Startup Culture. Humanism and Its Prefixes.

Latour, B. (1988). The Pasteurization of France. Harvard University Press.

Lepselter, S. (2016). The Resonance of Unseen Things: Poetics, Power, Captivity, and UFOs in the American Uncanny. University of Michigan Press.

Lyotard, J.-F. (1984). The Postmodern Condition: A Report on Knowledge. University of Minnesota Press.

Marcus, G. (2014, June 9). What Comes After the Turing Test? The New Yorker. https://www.newyorker.com/tech/annals-of-technology/what-comes-after-the-turing-test

Marcus, G. (2022, December 25). What to Expect When You’re Expecting … GPT-4 [Substack]. Marcus on AI. https://garymarcus.substack.com/p/what-to-expect-when-youre-expecting

Martínez, E. (2024). Re-evaluating GPT-4’s bar exam performance. Artificial Intelligence and Law. https://doi.org/10.1007/s10506-024-09396-9

Marvin, C. (1988). When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century. Oxford University Press.

McKelvey, F., & Neves, J. (2021). Introduction: Optimization and its discontents. Review of Communication, 21(2), 95–112. https://doi.org/10.1080/15358593.2021.1936143

McKelvey, F., & Roberge, J. (2023). Recursive power: AI governmentality and technofutures. In Handbook of Critical Studies of Artificial Intelligence (pp. 21–32). Edward Elgar Publishing.

Metz, C. (2023, May 1). ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead. The New York Times. https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

Mitchell, M. (2024). The Turing Test and our shifting conceptions of intelligence. Science, 385, eaqd9356.

Moor, J. H. (2001). The Status and Future of the Turing Test. Minds and Machines, 11(1), 77–93. https://doi.org/10.1023/A:1011218925467

Natale, S. (2021). Deceitful Media: Artificial Intelligence and Social Life after the Turing Test. Oxford University Press.

Nye, D. (1994). American Technological Sublime. MIT Press.

Nye, D. (2003). America as Second Creation. MIT Press.

Orr, W., & Kang, E. (2023, November 9). AI as a sport: On the competitive epistemologies of benchmarking. Society for the Social Studies of Science [4S] Annual Meeting, Honolulu.

Pfaffenberger, B. (1992). Technological Dramas. Science, Technology & Human Values, 17(3), 282–312. https://doi.org/10.1177/016224399201700302

Raji, I. D., Bender, E. M., Paullada, A., Denton, E., & Hanna, A. (2021). AI and the Everything in the Whole Wide World Benchmark. NeurIPS 2021.

Raley, R., & Rhee, J. (2023). Critical AI: A Field in Formation. American Literature, 95(2), 185–204. https://doi.org/10.1215/00029831-10575021

Rao, D. (2023, December 20). The Great AI Weirding [Substack newsletter]. AI Research & Strategy. https://deliprao.substack.com/p/the-great-ai-weirding

Rappaport, R. (1999). Ritual and Religion in the Making of Humanity. Cambridge University Press.

Ravetto-Biagioli, K. (2019). Digital Uncanny. Oxford University Press.

Riskin, J. (2003). The Defecating Duck, or, the Ambiguous Origins of Artificial Life. Critical Inquiry, 29(4), 599–633. https://doi.org/10.1086/377722

Roberts, S. T., & Hogan, M. (2019). Left Behind: Futurist Fetishes, Prepping and the Abandonment of Earth. B2o: An Online Journal, 4(2).

Roose, K. (2022, September 2). An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy. The New York Times. https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html

Roose, K. (2023, February 3). How ChatGPT Kicked Off an A.I. Arms Race. The New York Times. https://www.nytimes.com/2023/02/03/technology/chatgpt-openai-artificial-intelligence.html

Rose, J. (2022, September 7). Why Does This Horrifying Woman Keep Appearing in AI-Generated Images? Vice. https://www.vice.com/en/article/g5vjw3/why-does-this-horrifying-woman-keep-appearing-in-ai-generated-images

Routhier, D. (2023). With and Against: The Situationist International in the Age of Automation. Verso.

Sadowski, J. (2018, August 6). Potemkin AI. Real Life. http://reallifemag.com/potemkin-ai/

Salvaggio, E. (2023, October 29). The Hypothetical Image. Cybernetic Forests. https://www.cyberneticforests.com/news/social-diffusion-amp-the-seance-of-the-digital-archive

Altman, S. [@sama]. (2022, February 4). Techno-optimism is the only good solution to our current problems. Unfortunately, somehow expressing optimism about the future has become a radical act. [Tweet]. Twitter. https://twitter.com/sama/status/1489648210842816516

Sampson, T. D. (2017). The Assemblage Brain: Sense Making in Neuroculture. University of Minnesota Press.

Shapin, S., & Schaffer, S. (1985). Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. Princeton University Press.

Shapiro, A. (2018). Between autonomy and control: Strategies of arbitrage in the “on-demand” economy. New Media & Society, 20(8), 2954–2971. https://doi.org/10.1177/1461444817738236

Smith, H., & Burrows, R. (2021). Software, Sovereignty and the Post-Neoliberal Politics of Exit. Theory, Culture & Society, 38(6), 143–166. https://doi.org/10.1177/0263276421999439

Stark, L. (2023a). Breaking Up (with) AI Ethics. American Literature, 95(2), 365–379. https://doi.org/10.1215/00029831-10575148

Stark, L. (2023b, March 16). ChatGPT is Mickey Mouse. Daily Nous. https://dailynous.com/2023/03/16/philosophers-on-next-generation-large-language-models/

Statement on AI Risk. (2023, June 2). Center for AI Safety. https://www.safe.ai/statement-on-ai-risk#signatories

Stilgoe, J. (2023). We need a Weizenbaum test for AI. Science, 381(6658), eadk0176. https://doi.org/10.1126/science.adk0176

Steyerl, H. (2016). A sea of data: Apophenia and pattern recognition. e-flux, 72. http://www.e-flux.com/journal/a-sea-of-data-apophenia-and-pattern-mis-recognition/

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231206794

Taylor, A. (2018). The Automation Charade. Logic, 5. https://logicmag.io/failure/the-automation-charade/

Thatcher, J., O’Sullivan, D., & Mahmoudi, D. (2016). Data colonialism through accumulation by dispossession: New metaphors for daily data. Environment and Planning D: Society and Space, 34(6), 990–1006. https://doi.org/10.1177/026377581663319

Turing, A. (1950). Computing Machinery and Intelligence. Mind, New Series, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433

Turner, F. (2016). Prototype. In B. Peters (Ed.), Digital Keywords: A Vocabulary of Information, Society & Culture (pp. 256–268). Princeton University Press.

Turner, V. (1976). Social Dramas and Ritual Metaphors. In R. Schechner & M. Schumann (Eds.), Ritual, Play, and Performance: Readings in The Social Sciences / Theatre (pp. 97–120). The Seabury Press.

Turow, J., Hennessy, M., & Draper, N. (2015). The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers And Opening Them Up To Exploitation. Annenberg School for Communication, University of Pennsylvania.

UN tech agency rolls out human-looking robots for questions at a Geneva news conference. (2023, July 7). AP News. https://apnews.com/article/humanoid-robots-better-leaders-ai-geneva-486bb2bad260454a28aaa51ea31580a6

Utrata, A. (2023). Engineering Territory: Space and Colonies in Silicon Valley. American Political Science Review, 1–13. https://doi.org/10.1017/S0003055423001156

Vidler, A. (1999). The Architectural Uncanny: Essays in the Modern Unhomely. MIT Press.

Vinsel, L. (2021). You’re Doing It Wrong: Notes on Criticism and Technology Hype. Medium. https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5

Wang, A., Kapoor, S., Barocas, S., & Narayanan, A. (2023). Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 626. https://doi.org/10.1145/3593013.3594030

Watson, S. M. (2014, June 16). Data Doppelgängers and the Uncanny Valley of Personalization. The Atlantic. http://www.theatlantic.com/technology/archive/2014/06/data-doppelgangers-and-the-uncanny-valley-of-personalization/372780/

Weatherby, L. (2023, November 13). ChatGPT broke the Turing test. The Boston Globe. https://www.bostonglobe.com/2023/11/13/opinion/turing-test-ai-chatgpt/

Weatherby, L., & Justie, B. (2022). Indexical AI. Critical Inquiry, 48(2), 381–415. https://doi.org/10.1086/717312

Weiss, D. C. (2023, March 16). Latest version of ChatGPT aces bar exam with score nearing 90th percentile. ABA Journal. https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W.H. Freeman and Company.

Whittaker, M. (2021). The Steep Cost of Capture. ACM Interactions, November-December, 51–55. https://doi.org/10.1145/3488666

Wong, M. (2023, June 2). AI Doomerism Is a Decoy. The Atlantic. https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/

Published

2024-12-31

Section

Research Articles