Reassuring, Misleading, Debunking: Comparing Effects of XAI Methods on Human Decisions

Research output: Contribution to journal › Article › peer-review

Abstract

Trust calibration is essential in AI-assisted decision-making. If human users understand the rationale on which an AI model has made a prediction, they can decide whether they consider this prediction reasonable. Especially in high-risk tasks such as mushroom hunting (where a wrong decision may be fatal), it is important that users make correct choices to trust or overrule the AI. Various explainable AI (XAI) methods are currently being discussed as potentially useful for facilitating understanding and subsequently calibrating user trust. So far, however, it remains unclear which approaches are most effective. In this paper, the effects of XAI methods on human AI-assisted decision-making in the high-risk task of mushroom picking were tested. For that endeavor, the effects of (i) Grad-CAM attributions, (ii) nearest-neighbor examples, and (iii) network-dissection concepts were compared in a between-subjects experiment with participants representing end-users of the system. In general, nearest-neighbor examples improved decision correctness the most. However, varying effects for different task items became apparent. All explanations seemed to be particularly effective when they revealed reasons to (i) doubt a specific AI classification when the AI was wrong and (ii) trust a specific AI classification when the AI was correct. Our results suggest that well-established methods, such as Grad-CAM attribution maps, might not be as beneficial to end users as expected and that XAI techniques for use in real-world scenarios must be chosen carefully.
Original language: English
Article number: 16
Pages (from-to): 1-36
Number of pages: 36
Journal: ACM Transactions on Interactive Intelligent Systems
Volume: 14
Issue number: 3
DOIs
Publication status: Published - 03 Aug 2024

Fields of science

  • 102 Computer Sciences
  • 102003 Image processing
  • 102008 Computer graphics
  • 102015 Information systems
  • 102020 Medical informatics
  • 103021 Optics
  • 102013 Human-computer interaction
  • 501002 Applied psychology
  • 501012 Media psychology
  • 202035 Robotics
  • 102001 Artificial intelligence
  • 508016 Science communication
  • 509026 Digitalisation research

JKU Focus areas

  • Digital Transformation
