Understanding user behaviors and satisfaction in LLM-assisted tasks: prompting strategies and user knowledge states among college students
DOI: https://doi.org/10.47989/ir31iConf64164

Keywords: Generative AI, User experience, Topic modelling, Sentiment analysis, Five elements of user experience

Abstract
Introduction. This study examines how users interact with large language models (LLMs), focusing on the interplay between users’ knowledge states and prompt strategies in shaping satisfaction.
Method. Data were collected from 39 students, yielding 187 valid task records. Participants reported their knowledge states and satisfaction and submitted their dialogues for analysis. Prompt strategies and the characteristics of LLM responses were coded, and a strategy matching rate was proposed as a novel indicator of LLM application ability.
Analysis. Analyses combined descriptive statistics, correlation analyses, Mann–Whitney U tests, and mixed-effects logistic regression, with intercoder reliability established through iterative agreement.
Results. Directive, contextual, and iterative refinement strategies were most frequently observed, with notable discrepancies between self-reported and coded strategy use. Regression analyses showed that non-delegability and creativity increased the likelihood of satisfaction exceeding expectations, while topic familiarity and knowledge status effects reduced it. Chain-of-thought and self-consistency strategies positively predicted satisfaction, whereas directive and role prompting strategies had negative effects.
Conclusion. Satisfaction in LLM-assisted tasks emerges from the interaction of users' knowledge states, task characteristics, and prompting strategies. By introducing the strategy matching rate as an empirical indicator of LLM application ability, this study contributes a practical approach to assessing AI literacy and advances understanding of human–LLM collaboration.
License
Copyright (c) 2026 Tzu-Yun Chien, Po-Yu Chen, Muh-Chyun Tang

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
