{"id":1072,"date":"2025-08-15T08:16:09","date_gmt":"2025-08-15T08:16:09","guid":{"rendered":"https:\/\/www.visible-language.org\/journal\/?p=1072"},"modified":"2025-08-15T14:32:29","modified_gmt":"2025-08-15T14:32:29","slug":"issue-59-2-addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design","status":"publish","type":"post","link":"https:\/\/www.visible-language.org\/journal\/issue-59-2-addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design\/","title":{"rendered":"Addressing Uncertainty in LLM Outputs for Trust Calibration Through Visualization and User Interface Design"},"content":{"rendered":"<div class=\"sitecontainer\">\n<div class=\"pagecontainer\">\n<article class=\"vj-article\">\n<div class=\"articlesidebar\">\n<h5>Issue 59.2<\/h5>\n<p><a href=\"https:\/\/www.visible-language.org\/journal\/issue-59-2-reflecting-on-the-august-2025-issue-considerations-nowadays-and-implications-for\">Reflecting on the August 2025 Issue \u2014 Considerations Nowadays and Implications For<\/a><\/p>\n<p><a href=\"https:\/\/www.visible-language.org\/journal\/issue-59-2-research-led-pluralist-typographic-practices-case-studies-from-south-asia\">Research-Led Pluralist Typographic Practices: Case Studies from South Asia<\/a><\/p>\n<p><a href=\"https:\/\/www.visible-language.org\/journal\/issue-59-2-the-role-of-text-alignment-on-response-speed-and-accuracy-when-reading-chinese-english-bilingual-traffic-signs\">The Role of Text Alignment on Response Speed and Accuracy When Reading Chinese-English Bilingual Traffic Signs<\/a><\/p>\n<p><a href=\"https:\/\/www.visible-language.org\/journal\/issue-59-2-breaking-images-a-method-for-improving-design-students-visual-literacy\">Breaking Images: A Method for Improving Design Students\u2019 Visual Literacy<\/a><\/p>\n<p><a 
href=\"https:\/\/www.visible-language.org\/journal\/issue-59-2-addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design\">Addressing Uncertainty in LLM Outputs for Trust Calibration Through Visualization and User Interface Design<\/a><\/p>\n<p><a href=\"https:\/\/www.visible-language.org\/journal\/issue-59-2-dfi-a-seat-at-the-table-designing-for-ai-with-strategy-vision-and-collaboration\">A Seat at the Table: Designing for AI with Strategy, Vision, and Collaboration<\/a><\/p>\n<p><a href=\"https:\/\/www.visible-language.org\/journal\/issue-59-2-dfi-the-changing-definition-of-designers-in-the-age-of-generative-ai\">The Changing Definition of Designers in the Age of Generative AI<\/a><\/p>\n<p><a href=\"https:\/\/www.visible-language.org\/journal\/issue-59-2-dfi-the-human-touchpoint-recommendations-for-thoughtful-ai-feature-design\">The Human Touch(point): Recommendations for Thoughtful AI Feature Design<\/a><\/p>\n<p><a href=\"https:\/\/www.visible-language.org\/Issue-59-2\/Visible-Language-59-2.pdf\" target=\"_blank\">Download Issue 59.2 \u27a4<\/a><\/p>\n<\/div>\n<div class=\"articlecontent\">\n<h1>Addressing Uncertainty in LLM Outputs for Trust Calibration Through Visualization and User Interface Design<\/h1>\n<h3>Helen Armstrong<sup>a<\/sup>, Ashley L. Anderson<sup>a,b<\/sup>, Rebecca Planchart<sup>a<\/sup>, Kweku Baidoo<sup>a<\/sup>, and Matthew Peterson<sup>a<\/sup><\/h3>\n<h4 style=\"line-height: 1.5;\">a: Department of Graphic Design and Industrial Design, North Carolina State University, Raleigh, NC, USA; b: School of Visual Arts, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA<br \/>Corresponding author: Helen Armstrong (hsarmstr[at]ncsu.edu)<\/h4>\n<div class=\"abstractbox\">\n<p><span class=\"smallblueheading\">Abstract:<\/span> Large language models (LLMs) are becoming ubiquitous in knowledge work. 
However, the uncertainty inherent to LLM summary generation limits the efficacy of human-machine teaming, especially when users are unable to properly calibrate their trust in automation. Visual conventions for signifying uncertainty and interface design strategies for engaging users are needed to realize the full potential of LLMs. We report on an exploratory interdisciplinary project that resulted in four main contributions to explainable artificial intelligence in and beyond an intelligence analysis context. First, we provide and evaluate eight potential visual conventions for representing uncertainty in LLM summaries. Second, we describe a framework for uncertainty specific to LLM technology. Third, we specify 10 features for a proposed LLM validation system \u2014 the Multiple Agent Validation System (MAVS) \u2014 that utilizes the visual conventions, the framework, and three virtual agents to aid in language analysis. Fourth, we provide and describe four MAVS prototypes, one as an interactive simulation interface and the others as narrative interface videos. All four utilize a language analysis scenario to educate users on the potential of LLM technology in human-machine teams. To demonstrate applicability of the contributions beyond intelligence analysis, we also consider LLM-derived uncertainty in clinical decision-making in medicine and in climate forecasting. Ultimately, this investigation makes a case for the importance of visual and interface design in shaping the development of LLM technology.<\/p>\n<p><span class=\"smallblueheading\">Implications for practice:<\/span> This article focuses on the role and responsibilities of the emerging AI designer in modern product design and development. 
The distinction between AI for efficiency and AI for augmentation (Section 2.3) suggests a comprehensive framework that can help AI designers apply these categories and advocate for user and societal needs in the rush to incorporate AI functions into existing services. The discussion of user feedback loops (Section 2.6) characterizes good feedback systems as being granular, contextual, and actionable, with a palette of available UX patterns including inline corrections for refinement, transparent confidence scores, and feedback tagging. Empirical research is needed to provide AI designers with a generalized understanding of how these UI characteristics and UX patterns impact human understanding, and how they interact.<\/p>\n<\/div>\n<div class=\"keywordsbox\">\n<p><span class=\"smallblueheading\">Keywords:<\/span> explainable AI; human-machine teaming; intelligence analysis; large language models; trust calibration; uncertainty; user interface design; visual representation<\/p>\n<\/div>\n<p><a class=\"viewarticlebtn\" href=\"https:\/\/www.visible-language.org\/Issue-59-2\/addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design.pdf\" target=\"_blank\">Download PDF<\/a><\/p>\n<div class=\"articlepdfviewer\">\n<object \ndata=\"https:\/\/www.visible-language.org\/Issue-59-2\/addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design.pdf\" type=\"application\/pdf\" width=\"100%\" height=\"100%\"><br \/>\n<iframe loading=\"lazy\" src=\"https:\/\/www.visible-language.org\/Issue-59-2\/addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design.pdf\" width=\"100%\" height=\"100%\" style=\"border: none;\"><br \/>\n<\/iframe><\/object>\n<\/div>\n<div class=\"authorbox\">\n<p><span class=\"smallblueheading\">Authors<\/span><\/p>\n<p><strong>Helen Armstrong<\/strong> is a professor of graphic and experience design and the director 
of the MGXD program at NC State University. Her research focuses on digital rights, human-machine teaming, and accessible design. Armstrong authored Graphic Design Theory; Digital Design Theory; and co-authored Participate: Designing with User-Generated Content. Her recent book, Big Data, Big Design: Why Designers Should Care About Artificial Intelligence, demystifies AI \u2014 specifically machine learning \u2014 while inspiring designers to harness this technology and establish leadership via thoughtful human-centered design. Armstrong is a past member of the AIGA National Board of Directors, the editorial board of Design and Culture, and a former chair of the AIGA Design Educators Community.<\/p>\n<p><strong>Ashley L. Anderson<\/strong> is an assistant professor of graphic design at Virginia Tech and a PhD in Design candidate at NC State University. Her research focuses on human-centered design and visual representation, particularly in the context of mental health and psychological intervention. She examines how design can shape and enhance the theories, processes, and methods used in psychological intervention.<\/p>\n<p><strong>Rebecca Planchart<\/strong> is a product designer at Pendo.io, a software experience management solution, where she supports enterprise platform and conversational AI initiatives. Her past research explored explainability and trust calibration in AI systems through UX and UI strategies. She is particularly interested in leveraging explainable AI to support users in high-stakes decision-making contexts.<\/p>\n<p><strong>Kweku Baidoo<\/strong> is a lecturer in graphic and experience design at NC State University. His work explores trust-centered design and visual strategies that support human understanding of complex AI systems. 
He is particularly interested in how AI-assisted decision-making can be designed to enhance appropriate user trust and performance in high-stakes domains such as healthcare.<\/p>\n<p><strong>Matthew Peterson<\/strong> is an associate professor of graphic and experience design at NC State University. His research focuses on visual representation in user interface design, recently including the facilitation of AI in intelligence analysis workflows through human-machine teaming, text-image integration in immersive user information systems, and the facilitation of scale cognition and numeracy in virtual environments.<\/p>\n<\/div>\n<\/div>\n<p><a class=\"viewarticlebtn\" href=\"https:\/\/www.visible-language.org\/Issue-59-2\/addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design.pdf\" target=\"_blank\">Download PDF<\/a><\/p>\n<\/article>\n<div class=\"articlecitebox\">\n<div>\n<p class=\"blueurllink\">DOI being generated<\/p>\n<p><strong>Cite this article:<\/strong><br \/>Armstrong, H., Anderson, A. L., Planchart, R., Baidoo, K., &#038; Peterson, M. (2025). Addressing uncertainty in LLM outputs for trust calibration through visualization and user interface design. Visible Language, 59(2), 176\u2013217. https:\/\/www.visible-language.org\/journal\/issue-59-2-addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design<\/p>\n<\/div>\n<div>\n<p>First published online August 15, 2025. 
\u00a9 2025 Visible Language \u2014 this article is open access, published under the CC BY-NC-ND 4.0 license.<\/p>\n<pre>https:\/\/www.visible-language.org\/journal<\/pre>\n<p><span class=\"vlconsortiumheading\"><strong>Visible Language Consortium:<\/strong><\/span><br \/>University of Leeds (UK)<br \/>University of Cincinnati (USA)<br \/>North Carolina State University (USA)<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Issue 59.2 Reflecting on the August 2025 Issue \u2014 Considerations Nowadays and Implications For Research-Led Pluralist Typographic Practices: Case Studies from South Asia The Role of Text Alignment on Response Speed and Accuracy When Reading Chinese-English Bilingual Traffic Signs Breaking Images: A Method for Improving Design Students\u2019 Visual Literacy Addressing Uncertainty in LLM Outputs for &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.visible-language.org\/journal\/issue-59-2-addressing-uncertainty-in-llm-outputs-for-trust-calibration-through-visualization-and-user-interface-design\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Addressing Uncertainty in LLM Outputs for Trust Calibration Through Visualization and User Interface 
Design&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7,9],"tags":[],"class_list":["post-1072","post","type-post","status-publish","format-standard","hentry","category-issue-59-2","category-research-article","entry"],"_links":{"self":[{"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/posts\/1072","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/comments?post=1072"}],"version-history":[{"count":17,"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/posts\/1072\/revisions"}],"predecessor-version":[{"id":1184,"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/posts\/1072\/revisions\/1184"}],"wp:attachment":[{"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/media?parent=1072"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/categories?post=1072"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.visible-language.org\/journal\/wp-json\/wp\/v2\/tags?post=1072"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}