Author ORCID Identifier

Tsai - https://orcid.org/0000-0001-9188-0362

Document Type

Article

Publication Date

6-23-2014

Abstract

This paper reviews logical approaches to explaining AI and the challenges raised against them. We discuss the problems that arise when explanations are presented as accurate computational models that users cannot understand or use. We then introduce pragmatic approaches that treat explanation as a kind of speech act committed to felicity conditions, including intelligibility, trustworthiness, and usefulness to users. We argue that Explainable AI (XAI) is more than a matter of accurate and complete computational explanation: it requires a pragmatics of explanation to address the problems it seeks to solve. At the end of the paper, we draw a historical analogy to usability, a term that was once understood both logically and pragmatically but has evolved empirically over time to become richer and more functional.

Comments

Presented at the 2014 ACM Conference on Web Science.


Copyright © 2014 Authors
