Towards Deep and Interpretable Rule Learning (invited abstract)

Research output: Chapter in Book/Report/Conference proceeding › Conference proceedings

Abstract

Inductive rule learning is concerned with learning classification rules from data. Learned rules are inherently interpretable and easy to implement, which makes them well suited for formulating learned models in many domains. Nevertheless, current rule learning algorithms have several shortcomings. First, with respect to the current practice of equating high interpretability with low complexity, we argue that while shorter rules are important for discrimination, longer rules are often more interpretable than shorter ones, and that the tendency of current rule learning algorithms to strive for short and concise rules should be replaced with alternative methods that allow for longer concept descriptions. Second, we believe the main impediment of current rule learning algorithms is their inability to learn deeply structured rule sets, unlike the successful deep learning techniques. Both points are currently under investigation in our group, and we will show some preliminary results.
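To make the setting concrete, the separate-and-conquer (sequential covering) strategy underlying most classical inductive rule learners can be sketched as follows. This is an illustrative sketch only: the toy dataset, the attribute names, and the greedy precision-based refinement are assumptions for demonstration, not the algorithm discussed in the abstract.

```python
# Minimal separate-and-conquer rule learner (illustrative sketch).
# A rule is a list of (attribute, value) conditions, all of which must hold.

def covers(rule, example):
    """Check whether every condition of the rule holds for the example."""
    return all(example.get(a) == v for a, v in rule)

def learn_rules(examples):
    """Greedily learn rules until all positive examples are covered."""
    pos = [e for e in examples if e["label"]]
    neg = [e for e in examples if not e["label"]]
    rules = []
    while pos:
        rule = []
        covered_pos, covered_neg = pos[:], neg[:]
        # Conquer step: refine the rule until it covers no negatives.
        while covered_neg:
            candidates = {(a, v) for e in covered_pos
                          for a, v in e.items() if a != "label"}

            def score(cond):
                # Precision of the refined rule, ties broken by coverage.
                p = sum(covers(rule + [cond], e) for e in covered_pos)
                n = sum(covers(rule + [cond], e) for e in covered_neg)
                return (p / (p + n) if p + n else 0.0, p)

            rule.append(max(candidates, key=score))
            covered_pos = [e for e in covered_pos if covers(rule, e)]
            covered_neg = [e for e in covered_neg if covers(rule, e)]
        rules.append(rule)
        # Separate step: remove the positives this rule already covers.
        pos = [e for e in pos if not covers(rule, e)]
    return rules

# Hypothetical toy data: play outside depending on weather.
data = [
    {"outlook": "sunny", "windy": False, "label": True},
    {"outlook": "sunny", "windy": True, "label": True},
    {"outlook": "rain", "windy": False, "label": True},
    {"outlook": "rain", "windy": True, "label": False},
    {"outlook": "overcast", "windy": True, "label": False},
]

rules = learn_rules(data)
```

The greedy refinement loop is exactly the bias the abstract questions: each rule is kept as short as possible, stopping as soon as no negative example is covered, which favors short discriminative rules over longer, potentially more interpretable concept descriptions.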
Original language: English
Title of host publication: Proceedings of the 22nd Conference Information Technologies - Applications and Theory (ITAT)
Editors: Lucie Ciencialova, Martin Holena, Robert Jajcay, Tatiana Jajcayova, Frantisek Mraz, Dana Pardubska, Martin Platek
Place of Publication: Zuberec, Slovakia
Publisher: CEUR-WS.org
Number of pages: 1
Volume: 3226
Publication status: Published - 2022

Publication series

Name: CEUR Workshop Proceedings

Fields of science

  • 102019 Machine learning
  • 102033 Data mining

JKU Focus areas

  • Digital Transformation