Document Type
Article
Publication Date
8-2019
Publication Title
Computers in Human Behavior
Volume
97
First Page
250
Last Page
259
Abstract
Conversational agents (CAs) are an integral component of many personal and business interactions. Many recent advancements in CA technology have attempted to make these interactions more natural and human-like. However, it is currently unclear how human-like traits in a CA impact the way users respond to questions from the CA. In some applications where CAs may be used, detecting deception is important. Design elements that make CA interactions more human-like may induce undesired strategic behaviors from human deceivers seeking to mask their deception. To better understand this interaction, this research investigates the effect of a CA's conversational skill (i.e., its ability to mimic human conversation) on behavioral indicators of deception. Our results show that cues of deception vary depending on CA conversational skill, and that increased conversational skill leads users to engage in strategic behaviors that are detrimental to deception detection. This finding suggests that for applications in which it is desirable to detect when individuals are lying, the pursuit of more human-like interactions may be counterproductive.
Recommended Citation
Schuetzler, Ryan M.; Grimes, G. Mark; and Giboney, Justin Scott, "The effect of conversational agent skill on user behavior during deception" (2019). Information Systems and Quantitative Analysis Faculty Publications. 74.
https://digitalcommons.unomaha.edu/isqafacpub/74
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Funded by the University of Nebraska at Omaha Open Access Fund
Comments
© 2019 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
https://doi.org/10.1016/j.chb.2019.03.033