Beyond or within the binary? Constructing gendered meanings in generative AI use

Authors

Z. Lai and Y. Tang

DOI:

https://doi.org/10.47989/ir31iConf64121

Keywords:

Gender perception in GenAI, Digital performativity, Information equity, Sociocultural framing, Technological justice

Abstract

Introduction. Generative AI (GenAI) is increasingly embedded in daily life, yet its ostensibly neutral design often becomes a site for gendered interpretations. This study examines how Gen Z users in China perceive and construct gendered traits in GenAI, situating their accounts within broader concerns of neutrality and equity in the information society.

Method. Semi-structured interviews were conducted with 12 participants who regularly use multiple GenAI platforms. Discussions explored textual, visual, and auditory cues, user projections, and contextual triggers.

Analysis. Thematic analysis, informed by Butler’s theory of performativity and Haraway’s cyborg imaginary, was applied to 73,480 words of transcripts. Coding identified patterns in participants’ accounts of gendered cues, user projection, contextual attribution, and reflections on neutrality.

Results. Participants consistently mapped rational, didactic tones onto masculine authority and empathetic language onto feminine care. Avatars and voices anchored gender perceptions, while mimicry highlighted both adaptability and artificiality. Attribution also drew on cultural repertoires, personal experiences, and task contexts.

Conclusion. Neutrality in GenAI does not erase gender but becomes a site of projection and negotiation. The findings show how such systems can reproduce stereotypes while also enabling more inclusive, post-binary imaginaries, underscoring the need for critical AI literacy and designs that advance information equity.

References

Alvarez, J. M., Colmenarejo, A. B., Elobaid, A., Fabbrizzi, S., Fahimi, M., Ferrara, A., Ghodsi, S., Mougan, C., Papageorgiou, I., Reyero, P., Russo, M., Scott, K. M., State, L., Zhao, X., & Ruggieri, S. (2024). Policy advice and best practices on bias and fairness in AI. Ethics and Information Technology, 26(2), Article 31. https://doi.org/10.1007/s10676-024-09746-w

Baumer, E. P. S., Taylor, A. S., Brubaker, J. R., & McGee, M. (2024). Algorithmic subjectivities. ACM Transactions on Computer-Human Interaction, 31(3), Article 15. https://doi.org/10.1145/3660344

Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity.

Bernotat, J., Eyssel, F., & Sachse, J. (2021). The (fe)male robot: How robot body shape impacts first impressions and trust towards robots. International Journal of Social Robotics, 13(3), 477–489. https://doi.org/10.1007/s12369-019-00562-7

Blagoev, B., Hernes, T., Kunisch, S., & Schultz, M. (2024). Time as a research lens: a conceptual review and research agenda. Journal of Management, 50(6), 2152–2196. https://doi.org/10.1177/01492063231215032

Bridges, L. M., McElroy, K., & Welhouse, Z. (2024). Generative artificial intelligence: 8 critical questions for libraries. Journal of Library Administration, 64(1), 66–79. https://doi.org/10.1080/01930826.2024.2292484

Butler, J. (1990). Gender trouble: Feminism and the subversion of identity. Routledge.

Carter, L., & Liu, D. (2025). How was my performance? Exploring the role of anchoring bias in AI-assisted decision making. International Journal of Information Management, 82, 102875. https://doi.org/10.1016/j.ijinfomgt.2025.102875

Colas, C., Karch, T., Moulin-Frier, C., & Oudeyer, P.-Y. (2022). Language and culture internalization for human-like autotelic AI. Nature Machine Intelligence, 4(12), 1068–1076. https://doi.org/10.1038/s42256-022-00591-4

Curry, A. C., Robertson, J., & Rieser, V. (2020). Conversational assistants and gender stereotypes: public perceptions and desiderata for voice personas. In M. R. Costa-jussà, C. Hardmeier, W. Radford, & K. Webster (Eds.), Proceedings of the Second Workshop on Gender Bias in Natural Language Processing (pp. 72–78). Association for Computational Linguistics. https://aclanthology.org/2020.gebnlp-1.7/

Das, A., & Chanda, D. (2023). To trust or not to trust cybots: Ethical dilemmas in the posthuman organization. In A. Nayyar, M. Naved, & R. Rameshwar (Eds.), New horizons for Industry 4.0 in modern business (pp. 189–208). Springer International Publishing. https://doi.org/10.1007/978-3-031-20443-2_9

De Cet, M., Obaid, M., & Torre, I. (2025). Breaking the binary: a systematic review of gender-ambiguous voices in human–computer interaction. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1–17). ACM. https://doi.org/10.1145/3706598.3713608

Depounti, I., Saukko, P., & Natale, S. (2023). Ideal technologies, ideal women: AI and gender imaginaries in Redditors’ discussions on the Replika bot girlfriend. Media, Culture & Society, 45(4), 720–736. https://doi.org/10.1177/01634437221119021

Dogruel, L., & Joeckel, S. (2024). Gender stereotypes and voice assistants: do users’ gender and conversation topic matter? Behaviour & Information Technology, 43(10), 1913–1923. https://doi.org/10.1080/0144929X.2023.2235021

Duan, W., McNeese, N., & Li, L. (2025). Gender stereotypes toward non-gendered generative AI: the role of gendered expertise and gendered linguistic cues. Proceedings of the ACM on Human–Computer Interaction, 9(CSCW), 1–35. https://doi.org/10.1145/3701197

Giantini, G. (2023). The sophistry of the neutral tool. Weaponizing artificial intelligence and big data into threats toward social exclusion. AI and Ethics, 3(4), 1049–1061. https://doi.org/10.1007/s43681-023-00311-7

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057

Haraway, D. (2006). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In S. Stryker & S. Whittle (Eds.), The transgender studies reader (pp. 103–118). Routledge.

Hipólito, I., Winkle, K., & Lie, M. (2023). Enactive artificial intelligence: subverting gender norms in human–robot interaction. Frontiers in Neurorobotics, 17, 1149303. https://doi.org/10.3389/fnbot.2023.1149303

Hou, T.-Y., Tseng, Y.-C., & Yuan, C. W. (Tina). (2024). Is this AI sexist? The effects of a biased AI’s anthropomorphic appearance and explainability on users’ bias perceptions and trust. International Journal of Information Management, 76, 102775. https://doi.org/10.1016/j.ijinfomgt.2024.102775

Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388–409. https://doi.org/10.1080/0960085X.2021.1927212

Krishna, B., Krishnan, S., & Sebastian, M. (2025). Understanding the process of building institutional trust among digital payment users through national cybersecurity commitment trustworthiness cues: A critical realist perspective. Information Technology & People, 38(2), 714–756. https://doi.org/10.1108/ITP-05-2023-0434

Lenharo, M. (2024). ChatGPT turns two: how the AI chatbot has changed scientists’ lives. Nature, 636(8042), 281–282. https://doi.org/10.1038/d41586-024-03940-y

Letheren, K., Jetten, J., Roberts, J., & Donovan, J. (2021). Robots should be seen and not heard…sometimes: anthropomorphism and AI service robot interactions. Psychology & Marketing, 38(12), 2393–2406. https://doi.org/10.1002/mar.21575

Lund, B. D., Mannuru, N. R., & Agbaji, D. (2024). AI anxiety and fear: a look at perspectives of information science students and professionals towards artificial intelligence. Journal of Information Science. Advance online publication. https://doi.org/10.1177/01655515241282001

Naeem, M., Ozuem, W., Howell, K., & Ranfagni, S. (2023). A step-by-step process of thematic analysis to develop a conceptual model in qualitative research. International Journal of Qualitative Methods, 22, 16094069231205789. https://doi.org/10.1177/16094069231205789

Nataraja, M. (2025). Bold moves: Redefining soft skills for Gen Z and beyond. Notion Press.

Panarese, P., Grasso, M. M., & Solinas, C. (2025). Algorithmic bias, fairness, and inclusivity: a multilevel framework for justice-oriented AI. AI & Society. Advance online publication. https://doi.org/10.1007/s00146-025-02451-2

Rakowski, R., & Kowaliková, P. (2024). The political and social contradictions of the human and online environment in the context of artificial intelligence applications. Humanities and Social Sciences Communications, 11(1), 289. https://doi.org/10.1057/s41599-024-02725-y

Scheffer-Wentz, H. G. (2025). Negotiating gender: a qualitative analysis of trans-parasocial relationships with Gen Z social media users (Publication No. 3216782833) [Doctoral dissertation, University of North Dakota]. ProQuest Dissertations and Theses Global. https://www.proquest.com/dissertations-theses/negotiating-gender-qualitative-analysis-trans/docview/3216782833/se-2?accountid=13151

Spennemann, D. H. (2025). What do librarians look like? Stereotyping of a profession by generative AI. Journal of Librarianship and Information Science. Advance online publication. https://doi.org/10.1177/09610006251357286

Spielmann, J., & Stern, C. (2024). Preferences for gender stereotypicality in artificial intelligence: existence, comparison to human biases, and implications for choice. Personality and Social Psychology Bulletin. Advance online publication. https://doi.org/10.1177/01461672241307276

Sutko, D. M. (2020). Theorizing femininity in artificial intelligence: a framework for undoing technology’s gender troubles. Cultural Studies, 34(4), 567–592. https://doi.org/10.1080/09502386.2019.1671469

Torres-Martínez, S. (2025). Dehumanizing the human, humanizing the machine: organic consciousness as a hallmark of the persistence of the human against the backdrop of artificial intelligence. AI & Society, 40(6), 4635–4653. https://doi.org/10.1007/s00146-024-02165-x

Vallis, C., Wilson, S., & Casey, A. (2025). Fear and awe: making sense of generative AI through metaphor. Journal of Interactive Media in Education, 2025(1), Article 14. https://doi.org/10.5334/jime.972

Wagner, T. L., Marsh, D., & Curliss, L. (2025). Theories and implications for centering Indigenous and queer embodiment within sociotechnical systems. Journal of the Association for Information Science and Technology, 76(2), 397–412. https://doi.org/10.1002/asi.24746

Xavier, B. (2025). Biases within AI: challenging the illusion of neutrality. AI & Society, 40(3), 1545–1546. https://doi.org/10.1007/s00146-024-01985-1

Zhao, Y., & Wu, W. (2025). Inclusive, expressive, connective: how lifestyle sports shape youth culture in China. The Journal of Chinese Sociology, 12(1), Article 8. https://doi.org/10.1186/s40711-025-00233-3

Published

2026-03-20

How to Cite

Lai, Z., & Tang, Y. (2026). Beyond or within the binary? Constructing gendered meanings in generative AI use. Information Research: An International Electronic Journal, 31(iConf), 328–344. https://doi.org/10.47989/ir31iConf64121

Section

Conference proceedings
