Author ORCID Identifier

Tsai -

Document Type


Publication Date



The growth of artificial intelligence (AI) technology has advanced many human-facing applications. The recommender system is one of the promising sub-domains of AI-driven applications, aiming to predict items or ratings based on user preferences. These systems are empowered by large-scale data and automated inference methods that bring useful but puzzling suggestions to users. That is, the output is often unpredictable and opaque, which can shape user perceptions of the system in ways that are confusing, frustrating, or even dangerous in life-changing scenarios. Adding controllability and explainability are two promising approaches to improving human interaction with AI. However, the varying capabilities of AI-driven applications render conventional design principles less useful. This brings tremendous opportunities as well as challenges for user interface and interaction design, which have been discussed in the human-computer interaction (HCI) community for over two decades. The goal of this dissertation is to build a framework for AI-driven applications that enables people to interact effectively with the system and to interpret its output. Specifically, this dissertation explores how to bring controllability and explainability to a hybrid social recommender system, including several attempts at designing user-controllable and explainable interfaces that allow users to fuse multi-dimensional relevance and request explanations of the received recommendations. The work contributes to the HCI field by providing design implications for enhancing human-AI interaction and improving the transparency of AI-driven applications.


Dissertation presented, argued, and awarded at the University of Pittsburgh

Copyright held by author
