Document Type
Conference Proceeding
Publication Date
6-11-2024
Publication Title
dg.o '24: Proceedings of the 25th Annual International Conference on Digital Government Research
First Page
627
Last Page
636
DOI
https://doi.org/10.1145/3657054.3657128
Abstract
The use of generative AI, particularly large language models (LLMs) such as ChatGPT, to assess public opinion and sentiment has become increasingly prevalent. However, this upsurge in usage raises significant questions about the transparency and interpretability of the predictions these models make. This paper therefore explores the imperative of ensuring transparency when applying ChatGPT to public sentiment analysis. To tackle these challenges, we propose using a lexicon-based model as a surrogate to approximate both global and local predictions. Through case studies, we demonstrate how transparency mechanisms, bolstered by the lexicon-based model, can be seamlessly integrated into ChatGPT’s deployment for sentiment analysis. Drawing on the results of our study, we further discuss the implications for future research involving the use of LLMs in governmental functions, policymaking, and public engagement.
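To make the surrogate idea concrete: the abstract does not name a specific lexicon-based model, so the sketch below uses VADER purely as a stand-in; the example comments, the LLM labels, and the helper names (lexicon_label, local_explanation) are hypothetical and are not the authors' implementation. The sketch scores texts with the lexicon, checks agreement with LLM-assigned labels as a rough global view, and lists the lexicon words driving each score as a rough local view.

# Minimal sketch of a lexicon-based surrogate for LLM sentiment labels.
# VADER is an assumption here; the paper's actual lexicon model may differ.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def lexicon_label(text: str, threshold: float = 0.05) -> str:
    # Map VADER's compound score to a coarse sentiment label.
    compound = analyzer.polarity_scores(text)["compound"]
    if compound >= threshold:
        return "positive"
    if compound <= -threshold:
        return "negative"
    return "neutral"

def local_explanation(text: str) -> dict:
    # Crude local explanation: per-word valence looked up in the lexicon.
    scores = {}
    for tok in text.split():
        word = tok.strip(".,!?").lower()
        if word in analyzer.lexicon:
            scores[word] = analyzer.lexicon[word]
    return scores

# Hypothetical public comments and the labels an LLM returned for them.
comments = ["The new transit plan is a great improvement.",
            "Road closures made my commute terrible."]
llm_labels = ["positive", "negative"]

# Global view: how often the transparent surrogate agrees with the LLM.
agreement = sum(lexicon_label(c) == l for c, l in zip(comments, llm_labels))
print(f"surrogate/LLM agreement: {agreement}/{len(comments)}")

# Local view: which lexicon words drive each surrogate prediction.
for c in comments:
    print(c, "->", lexicon_label(c), local_explanation(c))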
Recommended Citation
Chun-Hua Tsai, Gargi Nandy, Deanna House, and John Carroll. 2024. Ensuring Transparency in Using ChatGPT for Public Sentiment Analysis. In Proceedings of the 25th Annual International Conference on Digital Government Research (dg.o '24). Association for Computing Machinery, New York, NY, USA, 627–636. https://doi.org/10.1145/3657054.3657128
Comments
© {Authors | ACM} 2024. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 25th Annual International Conference on Digital Government Research (dg.o '24), https://doi.org/10.1145/3657054.3657128