A Reference Process for Assessing the Reliability of Predictive Analytics Results

Research output: Contribution to journal › Article › peer-review

Abstract

Organizations employ data mining to discover patterns in historical data in order to learn predictive models. Depending on the predictive model, the predictions may be more or less accurate, raising the question of how reliable individual predictions are. This paper proposes a reference process aligned with CRISP-DM to enable the assessment of the reliability of individual predictions obtained from a predictive model. The reference process describes the activities, along the stages of the development process, required to establish a reliability assessment approach for a predictive model. The paper then presents two specific approaches for reliability assessment in more detail: perturbation of input cases and local quality measures. Furthermore, the paper describes elements of a knowledge graph that captures important metadata about the development process and the training data. The knowledge graph serves to properly configure and employ the reliability assessment approaches.
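The general idea behind perturbation of input cases can be illustrated with a minimal sketch (this is an illustration of the technique in general, not the paper's specific procedure): an individual prediction is considered more reliable when small perturbations of the input case leave the model's output largely unchanged. All model, data, and function names below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a simple classifier on synthetic data for illustration.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def prediction_stability(model, x, n_perturbations=100, scale=0.1, rng=rng):
    """Std. dev. of the positive-class probability under small Gaussian
    perturbations of the input case x; a lower value suggests a more
    reliable individual prediction."""
    noise = rng.normal(scale=scale, size=(n_perturbations, x.shape[0]))
    probs = model.predict_proba(x + noise)[:, 1]
    return probs.std()

x_confident = np.array([1.5, 1.5, 0.0])      # far from the decision boundary
x_borderline = np.array([0.05, -0.05, 0.0])  # near the decision boundary
print(prediction_stability(model, x_confident))   # small spread: stable
print(prediction_stability(model, x_borderline))  # larger spread: unstable
```

A case near the decision boundary typically shows a larger spread under perturbation than a case far from it, which is one way such a score can flag individual predictions as less reliable.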
Original language: English
Article number: 563
Number of pages: 27
Journal: SN Computer Science
Volume: 5
Issue number: 563
DOIs
Publication status: Published - May 2024

Fields of science

  • 102 Computer Sciences
  • 102010 Database systems
  • 102015 Information systems
  • 102016 IT security
  • 102025 Distributed systems
  • 102027 Web engineering
  • 102028 Knowledge engineering
  • 102030 Semantic technologies
  • 102033 Data mining
  • 102035 Data science
  • 509026 Digitalisation research
  • 502050 Business informatics
  • 502058 Digital transformation
  • 503008 E-learning

JKU Focus areas

  • Digital Transformation
