
Interactive Visualization for Fostering Trust in ML
D. H. Chau, D. A. Keim, D. Oelke
DOI: 10.4230/DagRep.12.8.103, 2023
Keywords: Interactive Visualization, Machine Learning, Accountability, Artificial Intelligence, Fairness, Responsibility, Understandability, Trust
The use of artificial intelligence continues to affect a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results – all crucial for increasing human trust in these systems – are still largely missing. The purpose of this seminar is to understand how these components factor into a holistic view of trust. Further, the seminar seeks to identify design guidelines and best practices for building interactive visualization systems that calibrate trust.