A Reference Process for Judging Reliability of Classification Results in Predictive Analytics

  • Simon Staudinger (Speaker)

Activity: Talk or presentation (Contributed talk, science-to-science)

Description

Organizations employ data mining to discover patterns in historical data. The models learned from the data allow analysts to make predictions about future events of interest. Different global measures, e.g., accuracy, sensitivity, and specificity, are employed to evaluate a predictive model. Global measures, however, may not suffice to properly assess the reliability of an individual prediction for a specific input case. In this paper, we propose a reference process for the development of predictive analytics applications that allow analysts to better judge the reliability of individual classification results. The proposed reference process is aligned with the CRISP-DM stages and complements each stage with a number of tasks required for reliability checking. We further explain two generic approaches that assist analysts with the assessment of the reliability of individual predictions, namely perturbation and local quality measures.

Keywords: Business Intelligence, Business Analytics, Decision Support Systems, Data Mining, CRISP-DM
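The perturbation idea mentioned above can be illustrated with a minimal sketch: slightly perturb the input case many times, re-classify each variant, and take the fraction of perturbations that keep the original label as a local reliability score. The classifier, noise model, and all parameter names below are illustrative assumptions, not the authors' actual method.

```python
import random

def predict(x):
    # Toy stand-in classifier: class 1 if the feature sum exceeds 1.0.
    return 1 if sum(x) > 1.0 else 0

def perturbation_reliability(x, predict_fn, n_samples=200, noise=0.05, seed=42):
    """Fraction of small Gaussian perturbations of x that keep the original label.

    Values near 1.0 suggest the prediction is stable around x; values
    near 0.5 indicate x lies close to the decision boundary.
    """
    rng = random.Random(seed)
    base = predict_fn(x)
    agree = sum(
        1
        for _ in range(n_samples)
        if predict_fn([v + rng.gauss(0.0, noise) for v in x]) == base
    )
    return agree / n_samples

# An input far from the decision boundary should score higher than one on it.
stable = perturbation_reliability([2.0, 2.0], predict)      # feature sum 4.0
borderline = perturbation_reliability([0.5, 0.5], predict)  # feature sum exactly 1.0
```

Here the score for the borderline case hovers around 0.5, flagging that individual prediction as unreliable even if the classifier's global accuracy is high.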
Period: 07 Jul 2021
Event title: 10th International Conference on Data Science, Technology and Applications (DATA 2021), July 6-8, 2021
Event type: Conference
Location: Austria

Fields of science

  • 102028 Knowledge engineering
  • 102016 IT security
  • 102027 Web engineering
  • 502050 Business informatics
  • 503008 E-learning
  • 102 Computer Sciences
  • 102030 Semantic technologies
  • 102033 Data mining
  • 102010 Database systems
  • 102035 Data science
  • 102015 Information systems
  • 102025 Distributed systems

JKU Focus areas

  • Digital Transformation