First impressions of a financial AI assistant: Differences between high trust and low trust users

Simon Schreibelmayr, Laura Moradbakhti, Martina Mara

Research output: Contribution to journal › Article › peer-review

Abstract

Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between a high trust and a low trust user group revealed significant differences in both open-ended and closed-ended answers. While high trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and trust calibration of users who differ in their initial levels of trust.
Original language: English
Article number: 1241290
Number of pages: 13
Journal: Frontiers in Artificial Intelligence
Volume: 6
DOIs:
Publication status: Published - Oct 2023

Fields of science

  • 102013 Human-computer interaction
  • 501002 Applied psychology
  • 501012 Media psychology
  • 202035 Robotics

JKU Focus areas

  • Digital Transformation
