Learning Decision Catalogues for Situated Decision Making: The Case of Scoring Systems

Stefan Heid, Jonas Hanselle, Johannes Fürnkranz, Eyke Hüllermeier

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we formalize the problem of learning coherent collections of decision models, which we call decision catalogues, and illustrate it for the case where the models are scoring systems. This problem is motivated by the recent rise of algorithmic decision-making and the idea of improving human decision-making through machine learning, in conjunction with the observation that decision models should be situated in terms of their complexity and resource requirements: instead of constructing a single decision model and using it in all cases, different models might be appropriate depending on the decision context. Decision catalogues are meant to support a seamless transition from very simple, resource-efficient models to more sophisticated but also more demanding ones. We present a general algorithmic framework for inducing such catalogues from training data, which tackles the learning task as a systematic search of the space of candidate catalogues and, to this end, makes use of heuristic search methods. We also present a concrete instantiation of this framework as well as empirical studies for performance evaluation, which, in a nutshell, show that greedy search is an efficient and hard-to-beat strategy for constructing catalogues of scoring systems.
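The abstract mentions greedy search as a strategy for constructing catalogues of scoring systems. The sketch below is only a rough, hypothetical illustration of that general idea, not the algorithm from the paper: it greedily grows a nested sequence of small scoring systems (integer weights over binary features with a decision threshold), so that each level of the catalogue adds one more feature. All names (fit_greedy_catalogue, predict), the integer weight pool, and the threshold-selection rule are assumptions made for this example.

```python
# Hypothetical sketch of a greedy catalogue construction (not the authors' implementation).
import numpy as np

def fit_greedy_catalogue(X, y, max_features=5, weights=(-2, -1, 1, 2)):
    """Greedily grow a nested sequence of scoring systems.

    X: (n, d) array of binary features, y: (n,) array of 0/1 labels.
    Returns a list with one model per complexity level; each model is a
    (feature index -> integer weight) dict together with a decision threshold.
    """
    n, d = X.shape
    chosen = {}          # features and weights of the current (most complex) model
    catalogue = []
    for _ in range(max_features):
        best = None      # (training accuracy, candidate model, threshold)
        for j in range(d):
            if j in chosen:
                continue
            for w in weights:
                trial = {**chosen, j: w}
                cols = list(trial.keys())
                scores = X[:, cols] @ np.array([trial[c] for c in cols])
                # pick the threshold that maximizes training accuracy
                for t in np.unique(scores):
                    acc = np.mean((scores >= t).astype(int) == y)
                    if best is None or acc > best[0]:
                        best = (acc, trial, t)
        if best is None:  # no feature left to add
            break
        _, chosen, threshold = best
        catalogue.append((dict(chosen), threshold))
    return catalogue

def predict(model, x):
    """Apply one scoring system of the catalogue to a single instance x."""
    feature_weights, threshold = model
    return int(sum(w * x[j] for j, w in feature_weights.items()) >= threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 8))
    y = (X[:, 0] + X[:, 1] >= 1).astype(int)  # toy target for illustration
    for k, model in enumerate(fit_greedy_catalogue(X, y, max_features=3), start=1):
        acc = np.mean([predict(model, x) == t for x, t in zip(X, y)])
        print(f"model with {k} feature(s): training accuracy {acc:.2f}")
```

In this toy setup, the nesting of the models is what makes the result a catalogue rather than a single classifier: a user can stop at any complexity level and still obtain a valid, if coarser, scoring system.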
Original language: English
Article number: 109190
Number of pages: 16
Journal: International Journal of Approximate Reasoning
Volume: 171
Issue number: 109190
Publication status: Published - 2024

Fields of science

  • 102001 Artificial intelligence
  • 102015 Information systems
  • 102019 Machine learning
  • 102028 Knowledge engineering
  • 102033 Data mining
  • 102035 Data science
  • 509018 Knowledge management

JKU Focus areas

  • Digital Transformation
