Evolving multi-user fuzzy classifier system with advanced explainability and interpretability aspects

Edwin Lughofer, Mahardhika Pratama

Research output: Contribution to journal › Article › peer-review

Abstract

Evolving classifiers, and especially evolving fuzzy classifiers, have been established as a prominent technique for addressing the recent demand to build classifiers in an incremental online manner, based on target labels typically provided by a single user. We present a framework for an interactive evolving multi-user fuzzy classifier system with advanced explainability and interpretability aspects (EFCS-MU-AEI). Multiple users may provide label feedback, based on which each user's own classifier is incrementally trained with evolving learning concepts. The classification outputs are amalgamated by a specific ensembling scheme that respects (i.) uncertainty in the class labels due to labeling ambiguities among the users and (ii.) different experience levels of the users, used as voting weights. A major focus lies on the explainability of classification outputs in order to increase the quality (consistency and certainty) of the users' labeling feedback. Explanations show the reasons why certain decisions were made, together with their certainty levels and rule coverage degrees. The reasons are deduced from the most active rules, whose lengths are reduced by a statistically motivated instance-based feature importance concept. Another major focus lies on the interpretability of the extracted rules, in order to represent understandable knowledge contained in the classification problem and, especially, to reveal the labeling behaviors of different users for different parts of the feature space (= different sample groups). A specific incremental feature weighting technique, respecting label uncertainties from multiple users and sample forgetting weights (for handling drifts), as well as a fuzzy set merging process, are proposed to achieve high compactness and transparency of the rules. Our approach was evaluated on a visual inspection scenario.
The explanations of the classifier decisions significantly improved the labeling behavior of three individual users, as shown by higher accumulated accuracy trends. Integrating feature weights into the classifier updates yielded transparent rules with four essential final features describing the classification problem. Based on this description, it became apparent in which ways, i.e. for which sample groups, users with lower experience levels should be taught to improve their understanding of the process.
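The multi-user ensembling idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact scheme; it only shows the general principle of fusing per-user soft class confidences (reflecting label uncertainty) with user experience levels as voting weights. All function names and values below are hypothetical.

```python
import numpy as np

def ensemble_decision(user_confidences, experience_weights):
    """Fuse per-user classifier outputs into one decision.

    user_confidences: (n_users, n_classes) soft outputs; each row sums to 1
                      and encodes that user's (possibly uncertain) labeling.
    experience_weights: (n_users,) non-negative voting weights reflecting
                        each user's experience level.
    Returns the winning class index and the fused confidence vector.
    """
    w = np.asarray(experience_weights, dtype=float)
    w = w / w.sum()                       # normalize voting weights
    conf = np.asarray(user_confidences, dtype=float)
    fused = w @ conf                      # weighted average of confidences
    return int(np.argmax(fused)), fused

# Three users, two classes; user 2 is assumed to be the most experienced.
confs = [[0.7, 0.3],   # user 0 leans towards class 0
         [0.4, 0.6],   # user 1 slightly prefers class 1
         [0.2, 0.8]]   # user 2 strongly prefers class 1
label, fused = ensemble_decision(confs, experience_weights=[1.0, 1.0, 2.0])
# The more experienced user's vote dominates: fused = [0.375, 0.625], label = 1
```

Here the doubled weight of the experienced user tips the fused decision to class 1 even though the users disagree, mirroring how experience-weighted voting can resolve labeling ambiguities among users.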
Original language: English
Pages (from-to): 458-476
Number of pages: 19
Journal: Information Fusion
Volume: 91
DOIs
Publication status: Published - 2023

Fields of science

  • 101 Mathematics
  • 101004 Biomathematics
  • 101013 Mathematical logic
  • 101014 Numerical mathematics
  • 101020 Technical mathematics
  • 101024 Probability theory
  • 101027 Dynamical systems
  • 101028 Mathematical modelling
  • 102001 Artificial intelligence
  • 102003 Image processing
  • 102009 Computer simulation
  • 102019 Machine learning
  • 102023 Supercomputing
  • 102035 Data science
  • 202027 Mechatronics
  • 206001 Biomedical engineering
  • 206003 Medical physics

JKU Focus areas

  • Digital Transformation
