Document Type

Article

Publication Date

6-12-2025

Publication Title

UMAP Adjunct '25: Adjunct Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization

First Page

192

Last Page

201

DOI

https://doi.org/10.1145/3708319.3733696

Abstract

Generative AI, particularly Large Language Models (LLMs), has revolutionized human-computer interaction by enabling the generation of nuanced, human-like text. This presents new opportunities, especially for enhancing explainability in AI systems such as recommender systems, a crucial factor for fostering user trust and engagement. LLM-powered AI chatbots can be leveraged to provide personalized explanations for recommendations. Although users often find these chatbot explanations helpful, they may not fully comprehend their content. Our research focuses on assessing how well users comprehend these explanations and identifying gaps in understanding. We also explore the key behavioral differences between users who effectively understand AI-generated explanations and those who do not. We designed a three-phase user study with 17 participants to explore these dynamics. The findings indicate that the clarity and usefulness of the explanations are contingent on users asking relevant follow-up questions and being motivated to learn. Comprehension also varies significantly based on users’ educational backgrounds.

Comments

The PDF passed the Adobe accessibility checker prior to upload.

This article was published open access under the University of Nebraska at Omaha and ACM open access publishing agreement.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.


Funded by the University of Nebraska at Omaha Open Access Fund