Automated hiring platforms as mythmaking machines and their symbolic economy

Authors

  • Etienne Grenier, Institut national de la recherche scientifique
  • Nicolas Chartier-Edwards, Institut national de la recherche scientifique

DOI:

https://doi.org/10.33621/jdsr.v6i4.40459

Keywords:

Automated hiring, HireVue, Reddit, Nonknowledge, Political economy

Abstract

Automated hiring platforms offer critical Artificial Intelligence (AI) researchers a privileged site for the study of technological controversies and their unfolding, as these systems often end up entangled in scandals. In 2019, an official complaint filed with the Federal Trade Commission (FTC) outlined the main controversy created by such systems, directly targeting HireVue’s platform. According to the complaint, these systems “evaluate a job applicant’s qualifications based upon their appearance by means of an opaque, proprietary algorithm” (FTC, 2019, p. 1). Existing critical literature outlines how Basic Emotions theory (Ekman, 1999) and its deployment through AI-powered systems fuel ventures such as automated hiring platforms (Stark, 2018). Therein lies a form of opportunism in which the use of available audiovisual data and the revalorization of heavily criticized, simplistic theories of the human mind produce discriminatory automated decision-making grounded in bogus science (Crawford, 2021). Even though HireVue modified its product by removing the video analysis component in reaction to critiques formulated by regulatory bodies, the company kept its AI-powered voice analysis, its one-sided video interview technique and the multitude of gaming assignments designed to evaluate candidates. Going beyond scandals and hype, this article tackles the political economy generated by automated hiring systems. Job seekers are confronted with a system that disrupts a well-established, sociologically stable technique, the interview, and its technical-object counterpart, the resume. We assert that the gaps of nonknowledge (Beer, 2023) created by one-sided interviews break the commonly accepted interactionist framework that originally informed the hiring process.
Using HireVue as a case study, we lay bare the political economy of AI-powered automated hiring platforms, highlighting how individuals assess their capacity to acquire agency in the face of opaque technologies.

Author Biographies

Etienne Grenier, Institut national de la recherche scientifique

Etienne Grenier is an artist and researcher working in the field of digital cultures. Currently a PhD candidate at the Institut national de la recherche scientifique in Montréal, he studies the impacts of datafication on cultural production and contributes to Project Shaping AI. Maintaining an active creative practice in digital arts, he has presented installations and performances in leading institutions and major festivals in Europe and the Americas.

Nicolas Chartier-Edwards, Institut national de la recherche scientifique

Nicolas Chartier-Edwards is a PhD student at the Institut national de la recherche scientifique (INRS) in Québec City, Canada, where he pursues a custom program in Politics, Science and Technology. His thesis focuses on the transformation of the Canadian federal state’s statecraft through the deployment of artificial intelligence in the legislative, executive and judiciary branches. His broader research, conducted in the context of the Shaping 21st Century AI – Controversies and Closure in Media, Policy, and Research project, investigates the political economy of innovation clusters as well as the social transformations prompted by the technoscientific shift. As of fall 2023, Nicolas will be an invited researcher at the médialab at Sciences Po Paris.

References

Adams, N. N. (2024). “Scraping” Reddit posts for academic research? Addressing some blurred lines of consent in growing internet-based research trend during the time of Covid-19. International Journal of Social Research Methodology, 27(1), 47–62. https://doi.org/10.1080/13645579.2022.2111816

Amoore, L. (2023). Machine learning political orders. Review of International Studies, 49(1), 20–36. https://doi.org/10.1017/S0260210522000031

Associated Press. (2022, May 12). U.S. warns of discrimination in using artificial intelligence to screen job candidates. NPR. https://www.npr.org/2022/05/12/1098601458/artificial-intelligence-job-discrimination-disabilities

Bataille, G. (1967). La part maudite. Les Éditions de Minuit.

Bataille, G. (2011). La notion de dépense. Lignes.

Bataille, G. (2011). L’érotisme. Les Éditions de Minuit.

Baumgartner, J., Zannettou, S., Keegan, B., Squire, M., & Blackburn, J. (2020, May). The Pushshift Reddit dataset. Proceedings of the International AAAI Conference on Web and Social Media, 14, 830–839.

Beer, D. (2022). The problem of researching a recursive society: Algorithms, data coils and the looping of the social. Big Data & Society, 9(2). https://doi.org/10.1177/20539517221104997

Beer, D. (2023). The tensions of algorithmic thinking: automation, intelligence, and the politics of knowing. Bristol University Press.

Borup, M., Brown, N., Konrad, K., & Van Lente, H. (2006). The sociology of expectations in science and technology. Technology Analysis & Strategic Management, 18(3–4), 285–298. https://doi.org/10.1080/09537320600777002

Brunskill, V.-L. (2019, April). What is HireVue? TechTarget: HR Software. https://www.techtarget.com/searchhrsoftware/definition/HireVue

Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512

Callon, M. (2006). Pour une sociologie des controverses technologiques. In Madeleine Akrich, Michel Callon & Bruno Latour (Eds.), Sociologie de la traduction : Textes fondateurs (pp. 135‑157). Presses des Mines.

Cardon, D., Cointet, J. P., & Mazières, A. (2018). La revanche des neurones. Réseaux, 211(5), 173-220. https://doi.org/10.3917/res.211.0173

Cetina, K. K., & Bruegger, U. (2002). Global microstructures: The virtual societies of financial markets. American Journal of Sociology, 107(4), 905–950. https://doi.org/10.1086/341045

Cetina, K. K. (2009). The synthetic situation: Interactionism for a global world. Symbolic Interaction, 32(1), 61–87. https://doi.org/10.1525/si.2009.32.1.61

Christin, A. (2020). The ethnographer and the algorithm: Beyond the black box. Theory and Society, 49(5–6), 897–918. https://doi.org/10.1007/s11186-020-09411-3

Clarke, K. (2021). Assistant Attorney General Kristen Clarke delivers keynote on AI and civil rights for the Department of Commerce’s National Telecommunications and Information Administration’s virtual listening session. Office of Public Affairs, U.S. Department of Justice, Washington, DC, USA. https://www.justice.gov/opa/speech/assistant-attorney-general-kristen-clarke-delivers-keynote-ai-and-civil-rights-department

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Crawford, K. (2021, April 27). Artificial Intelligence is Misreading Human Emotion. The Atlantic. https://www.theatlantic.com/technology/archive/2021/04/artificial-intelligence-misreading-human-emotion/618696/

Doctorow, C. (2019, October 23). When the HR department is a robotic phrenologist: “Face-scanning algorithm” gains popularity as a job-applicant screener. Boing Boing. https://boingboing.net/2019/10/23/myers-briggs-2-0.html

Dunn, A. (2023, July 3). Job candidate accuses CVS Health of using AI-powered “lie detector screening” without proper notice. ALM Law.com. https://www.law.com/2023/07/03/job-candidate-accuses-cvs-health-of-using-ai-powered-lie-detector-screening-without-proper-notice/

Ekman, P. (1992). Are there basic emotions? Psychological Review, 99(3), 550–553. https://doi.org/10.1037/0033-295X.99.3.550

Farrell, H. & Fourcade, M. (2023). The Moral Economy of High-Tech Modernism. Dædalus, 152(1), 225-235. https://doi.org/10.1162/daed_a_01982

Ferrari, F. & Graham, M. (2021). Fissures in algorithmic power: platforms, code, and contestation. Cultural Studies, 35(4–5), 814–832. https://doi.org/10.1080/09502386.2021.1895250

Fourcade, M., Beckert, J., Fligstein, N., & Carruthers, B. G. (2023). Reflections on the field of socio-economics. Socio-Economic Review, 21(2), 703–720. https://doi.org/10.1093/ser/mwad014

Goffman, E. (1956). The presentation of Self in Everyday Life. University of Edinburgh Social Sciences Research Centre.

Goffman, E. (1967). Interaction Ritual. Pantheon Books.

Goffman, E. (1983). The Interaction Order: American Sociological Association, 1982 Presidential Address. American Sociological Review 48(1), 1–17. https://doi.org/10.2307/2095141

HireVue. (2019). How to Crack the Technical Hiring Code [Webinar Recap]. https://www.hirevue.com/blog/hiring/how-to-crack-the-technical-hiring-code-webinar-recap

HireVue. (2022). Explainability statement. https://www.hirevue.com/legal/ai-explainability-statement

HireVue. (2023, February 2). https://www.hirevue.com/

Hunkenschroer, A. L., & Luetge, C. (2022). Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics, 178, 977–1007. https://doi.org/10.1007/s10551-022-05049-6

Johnston, L. (2023, May 21). A Milton resident’s lawsuit against CVS raises questions about the use of AI lie detectors in hiring. The Boston Globe. https://www.bostonglobe.com/2023/05/21/business/can-ai-help-employers-screen-honesty/

Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1), 366–410. https://doi.org/10.5465/annals.2018.0174

Knight, W. (2021, January 12). Job Screening Service Halts Facial Analysis of Applicants. Wired. https://www.wired.com/story/job-screening-service-halts-facial-analysis-applicants/

Latzko-Toth, G., Bonneau, C., & Millette, M. (2022). Small data, thick data: Data thickening strategies for social media research. In Anabel Quan-Haase & Luke Sloan (Eds.), The SAGE Handbook of Social Media Research Methods (pp. 157–172). SAGE Publications Ltd. https://doi.org/10.4135/9781529782943

Lepage-Richer, T. (2021). Adversariality in machine learning systems: on neural networks and the limits of knowledge. The Cultural Life of Machine Learning: An Incursion into Critical AI Studies, 197–225. https://doi.org/10.1007/978-3-030-56286-1_7

Low, D. M., Rumker, L., Talkar, T., Torous, J., Cecchi, G., & Ghosh, S. S. (2020). Natural language processing reveals vulnerable mental health support groups and heightened health anxiety on Reddit during COVID-19: Observational study. Journal of Medical Internet Research, 22(10), e22635. https://doi.org/10.2196/22635

Macospol. (2007). Consortium Agreement Annex I, p. 6. Unpublished document submitted to the European Union, 5 November.

Marres, N. (2020). For a situational analytics: An interpretative methodology for the study of situations in computational settings. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720949571

Marx, K. (1968). Le capital : livre 1. Gallimard.

Maturana, H., & Varela, F. J. (1980). Autopoiesis and Cognition: The Realization of the Living. D. Reidel Publishing Co.

McKelvey, F., & Roberge, J. (2023). Recursive power: AI governmentality and technofutures. In Simon Lindgren (Ed.), Handbook of Critical Studies of Artificial Intelligence (pp. 21–32). Edward Elgar Publishing Limited.

Personnel Today. (2022, March 17). Estée Lauder staff win payout after being “sacked by algorithm”. https://www.personneltoday.com/hr/estee-lauder-women-sacked-by-algorithm-redundancy-software-hirevue-automation/

Rahman, H. A. (2021). The invisible cage: Workers’ reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4), 945–988. https://doi.org/10.1177/00018392211010118

Rappin, B. (2018). Algorithme, management, crise : le triptyque cybernétique du gouvernement de l’exception permanente. Quaderni, 103–114. https://doi.org/10.4000/quaderni.1182

Roberge, J., Morin, K., & Senneville, M. (2019). Deep learning’s governmentality: The other black box. In A. Sudmann (Ed.), The democratization of artificial intelligence: Net politics in the era of learning algorithms (pp. 123–142). transcript Verlag. https://doi.org/10.1515/9783839447192-008

Roberge, J., Senneville, M., & Morin, K. (2020). How to translate artificial intelligence? Myths and justifications in public discourse. Big Data & Society, 7(1). https://doi.org/10.1177/205395172091996

Roberge, J., & Castelle, M. (2021). Toward an end-to-end sociology of 21st-century machine learning. The cultural life of machine learning: An incursion into critical AI studies, 1–29. https://doi.org/10.1007/978-3-030-56286-1_1

Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717738104

Seaver, N. (2022). Computing taste: algorithms and the makers of music recommendation. University of Chicago Press.

Slemon, A., McAuliffe, C., Goodyear, T., McGuinness, L., Shaffer, E., & Jenkins, E. K. (2021). Reddit users’ experiences of suicidal thoughts during the COVID-19 pandemic: A qualitative analysis of r/Covid19_support posts. Frontiers in Public Health, 9.

Sloane, M., Moss, E., & Chowdhury, R. (2022). A Silicon Valley love triangle: Hiring algorithms, pseudo-science, and the quest for auditability. Patterns, 3, 1–9. https://doi.org/10.1016/j.patter.2021.100425

Sloane, M. (2023). Automation and Recruiting: Understanding the Intersection of Algorithmic Systems and Professional Discretion in the Sourcing of Job Candidates. Proceedings “Automation by Design” Conference, University of Minnesota, Forthcoming, 1–19. http://dx.doi.org/10.2139/ssrn.4516453

Stark, L. (2018). Algorithmic psychometrics and the scalable subject. Social Studies of Science, 48(2), 204–231.

Stark, L. (2020). The emotive politics of digital mood tracking. New Media & Society, 22(11), 2039–2057. https://doi.org/10.1177/1461444820924624

Stark, L., & Hoey, J. (2021, March 3–10). The ethics of emotion in artificial intelligence systems. Proceedings of FAccT ’21, virtual event, Canada, 782–793. https://doi.org/10.1145/3442188.3445939

Stark, L., & Hutson, J. (2021). Physiognomic artificial intelligence. Fordham Intellectual Property, Media & Entertainment Law Journal, 32, 922.

Thibodeau, P. (2023, June 5). As HR adopts AI in hiring, the risks are mounting. TechTarget: HR Software. https://www.techtarget.com/searchhrsoftware/news/366539240/As-HR-adopts-AI-in-hiring-the-risks-are-mounting

Tomkins, S. S. (2008). Affect Imagery Consciousness: The Complete Edition: Two Volumes. Springer Publishing Company.

Venturini, T. (2010). Diving in magma: how to explore controversies with actor-network theory. Public Understanding of Science, 19(3), 258–273. https://doi.org/10.1177/0963662509102694

Willner, K. & Saba, C. (2022). Class Action Targeting Video Interview Technology Reminds Employers of Testing Risks. Paul Hastings. https://www.paulhastings.com/insights/client-alerts/class-action-targeting-video-interview-technology-reminds-employers-of

Published

2024-12-31

Section

Research Articles
