Abstract
The increasing involvement of Artificial Intelligence (AI) in moral decision situations raises the possibility of users attributing blame to AI-based systems for negative outcomes. In two experimental studies with a total of participants, we explored the attribution of blame and the underlying moral reasoning. Participants had to classify mushrooms in pictures as edible or poisonous with the support of an AI-based app. Afterwards, participants read a fictitious scenario in which a misclassification due to an erroneous AI recommendation led to the poisoning of a person. In the first study, increased system transparency through explainable AI techniques reduced blaming of the AI. A follow-up study showed that the attribution of blame to each actor in the scenario depends on that actor's perceived obligation and capacity to prevent such an event. Thus, blaming the AI is indirectly associated with mind attribution, and blaming oneself is associated with the perceived capability to recognize a wrong classification. We discuss implications for future research on moral cognition in the context of human–AI interaction.
| Original language | English |
|---|---|
| Pages (from-to) | 32412-32421 |
| Number of pages | 10 |
| Journal | Current Psychology |
| Volume | 43 |
| Issue number | 41 |
| DOIs | |
| Publication status | Published - Oct 2024 |
Fields of science
- 102013 Human-computer interaction
- 501002 Applied psychology
- 501012 Media psychology
- 202035 Robotics
JKU Focus areas
- Digital Transformation
Projects
HOXAI - Hands-on Explainable AI
Mara, M. (PI) & Streit, M. (PI)
01.10.2020 → 31.12.2022
Project: Funded research › Federal / regional / local authorities