Biasing Rule-Based Explanations Towards User Preferences

Research output: Contribution to journal › Article › peer-review

Abstract

With the growing prevalence of Explainable AI (XAI), the effectiveness, transparency, usefulness, and trustworthiness of explanations have come into focus. However, recent work in XAI often still falls short in terms of integrating human knowledge and preferences into the explanatory process. In this paper, we aim to bridge this gap by proposing a novel method, which personalizes rule-based explanations to the needs of different users based on their expertise and background knowledge, formalized as a set of weighting functions over a knowledge graph. While we assume that user preferences are provided as a weighting function, our focus is on generating explanations tailored to the user’s background knowledge. The method transforms rule-based interpretable models into personalized explanations considering user preferences in terms of the granularity of knowledge. Evaluating our approach on multiple datasets demonstrates that the generated explanations are highly aligned with simulated user preferences compared to non-personalized explanations.
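To make the idea concrete, here is a minimal, hypothetical sketch of how user preferences as weights over a knowledge graph could steer the granularity of a rule's concepts. The toy graph, the weights, and all names below are illustrative assumptions, not taken from the paper's actual method.

```python
# Hypothetical sketch (not the paper's algorithm): replace each concept in a
# rule's antecedent with the ancestor concept, drawn from a toy knowledge
# graph, that the user's weighting function scores highest.

# Toy knowledge graph: concept -> parent (root has parent None)
PARENT = {
    "tachycardia": "arrhythmia",
    "arrhythmia": "heart condition",
    "heart condition": None,
}

def ancestors(concept):
    """Return the concept plus all of its ancestors up to the root."""
    chain = [concept]
    while PARENT.get(chain[-1]) is not None:
        chain.append(PARENT[chain[-1]])
    return chain

def personalize(rule_concepts, user_weight):
    """For each concept, pick the candidate the user weights highest."""
    return [max(ancestors(c), key=lambda a: user_weight.get(a, 0.0))
            for c in rule_concepts]

# Illustrative weighting functions: an expert prefers fine-grained terms,
# a layperson prefers coarse, everyday terms.
expert = {"tachycardia": 1.0, "arrhythmia": 0.5, "heart condition": 0.1}
layperson = {"tachycardia": 0.1, "arrhythmia": 0.3, "heart condition": 1.0}

rule = ["tachycardia"]
print(personalize(rule, expert))      # ['tachycardia']
print(personalize(rule, layperson))   # ['heart condition']
```

The same rule is thus rendered at different levels of abstraction depending on the user's weighting function, which is the personalization effect the abstract describes.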
Original language: English
Article number: 535
Pages (from-to): 535
Number of pages: 18
Journal: Information
Volume: 16
Issue number: 7
DOIs
Publication status: Published - 24 Jun 2025

Fields of science

  • 102001 Artificial intelligence
  • 102019 Machine learning
  • 102028 Knowledge engineering

JKU Focus areas

  • Digital Transformation
