Online active learning for an evolving fuzzy neural classifier based on data density and specificity

Paulo De Campos Souza, Edwin Lughofer

Research output: Contribution to journal › Article › peer-review

Abstract

Evolving fuzzy neural classifiers are incremental, adaptive models that update their architecture and parameters with new incoming data samples, which typically arrive in the form of data streams in classification problems. Most such techniques assume that target labels are permanently available and update their structures and parameters in a fully supervised manner. This paper applies ideas from active learning to select only the data most relevant for updating the model, which can greatly reduce the annoying and costly labeling effort for users/operators of an online system. We therefore propose an online active learning (oAL) methodology that is closely linked to the internal evolving learning engine for fuzzy neurons, which is based on incremental data-cloud formation. Sample selection rests on the specificity of the current clouds, and especially on the change in their specificity induced by new (unsupervised) samples, in order to identify those samples that carry information relevant to updating previously formed clouds. This is combined with the unsupervised cloud evolution criterion: when fulfilled, it indicates new knowledge in the data for which the class response needs to be known, so the sample should be selected for labeling feedback. In synergy with the evolving fuzzy neural classifier, the method acts in an incremental, single-pass manner without using any past samples, which makes it extremely fast, since only the fuzzy neurons attached to a new sample need to be checked for the degree of their specificity change. To demonstrate the technique's efficiency, evaluations were conducted on binary classification streams commonly used by the machine learning community.
The number of supervised samples needed for model updates could be significantly reduced with a low or even negligible decrease in classification accuracy trends, whereas a random selection of samples (at the same percentages as selected by our oAL approach) showed large performance downtrends. Furthermore, very similar rule evolution trends were observed for different percentages of selected samples, indicating good robustness of our method with respect to knowledge extraction (which remains unchanged).
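As a rough illustration of the selection logic the abstract describes, the following Python sketch maintains simple data clouds (mean and spread statistics) and flags a stream sample for labeling either when it evolves a new cloud (the evolution criterion) or when it changes the specificity of its nearest cloud beyond a threshold. The specificity proxy (inverse average spread), the distance-based evolution radius, and all class and parameter names are illustrative assumptions, not the paper's actual formulas.

```python
import numpy as np

class CloudOALSketch:
    """Toy sketch of cloud-based online active learning.

    A sample is queried for a label if (a) it evolves a new cloud
    (new knowledge) or (b) it changes the specificity of its nearest
    cloud by more than a threshold (relevant information).
    """

    def __init__(self, evolve_radius=1.0, spec_change_thresh=0.05):
        self.clouds = []          # each cloud: {"mean", "n", "sq_sum"}
        self.evolve_radius = evolve_radius
        self.spec_change_thresh = spec_change_thresh

    def _new_cloud(self, x):
        self.clouds.append({"mean": x.copy(), "n": 1,
                            "sq_sum": float(np.sum(x ** 2))})

    def _specificity(self, c):
        # Assumed proxy: inverse of the average spread around the mean.
        if c["n"] < 2:
            return 1.0
        var = c["sq_sum"] / c["n"] - float(np.sum(c["mean"] ** 2))
        return 1.0 / (1.0 + max(var, 0.0))

    def query(self, x):
        """Process one stream sample; return True if a label is needed."""
        x = np.asarray(x, dtype=float)
        if not self.clouds:
            self._new_cloud(x)
            return True
        dists = [np.linalg.norm(x - c["mean"]) for c in self.clouds]
        i = int(np.argmin(dists))
        if dists[i] > self.evolve_radius:   # evolution criterion fires
            self._new_cloud(x)
            return True
        c = self.clouds[i]
        before = self._specificity(c)
        # Single-pass incremental update of the nearest cloud only.
        c["n"] += 1
        c["mean"] = c["mean"] + (x - c["mean"]) / c["n"]
        c["sq_sum"] += float(np.sum(x ** 2))
        after = self._specificity(c)
        return bool(abs(after - before) > self.spec_change_thresh)
```

In a real deployment, queried samples would be sent to an annotator and then used to update the classifier in supervised mode; the sketch shows only the unsupervised selection decision, which touches just the cloud nearest to the incoming sample, matching the single-pass, no-past-samples character described above.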
Original language: English
Pages (from-to): 269-286
Number of pages: 18
Journal: Neurocomputing
Volume: 512
Publication status: Published - Nov 2022

Fields of science

  • 101 Mathematics
  • 101004 Biomathematics
  • 101013 Mathematical logic
  • 101014 Numerical mathematics
  • 101020 Technical mathematics
  • 101024 Probability theory
  • 101027 Dynamical systems
  • 101028 Mathematical modelling
  • 102001 Artificial intelligence
  • 102003 Image processing
  • 102009 Computer simulation
  • 102019 Machine learning
  • 102023 Supercomputing
  • 102035 Data science
  • 202027 Mechatronics
  • 206001 Biomedical engineering
  • 206003 Medical physics

JKU Focus areas

  • Digital Transformation
