Abstract
Structure learning is the identification of the structure of
graphical models based solely on observational data and is NP-hard.
Important components of many structure learning algorithms are heuristics
or bounds that reduce the size of the search space. We argue that
variable relevance rankings, which can be calculated easily for many standard
regression models, can be used to improve the efficiency of structure
learning algorithms. In this contribution, we describe measures for
evaluating the quality of variable relevance rankings, in particular
the well-known normalized discounted cumulative gain (NDCG).
We evaluate and compare different regression methods using the proposed
measures on a set of linear and non-linear benchmark problems.
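To illustrate the NDCG measure mentioned above, here is a minimal sketch (not the authors' implementation) of how a predicted variable ranking can be scored against known ground-truth relevance values; the function names and example inputs are hypothetical:

```python
import numpy as np

def dcg(relevances):
    """Discounted cumulative gain of relevance scores given in ranked order."""
    relevances = np.asarray(relevances, dtype=float)
    positions = np.arange(1, len(relevances) + 1)
    return np.sum(relevances / np.log2(positions + 1))

def ndcg(predicted_ranking, true_relevance):
    """NDCG of a predicted variable ranking against ground-truth relevance.

    predicted_ranking: variable indices ordered from most to least relevant
                       according to the regression model.
    true_relevance:    ground-truth relevance score per variable index.
    """
    gains = [true_relevance[i] for i in predicted_ranking]
    ideal = sorted(true_relevance, reverse=True)
    return dcg(gains) / dcg(ideal)

# Example: the model ranks variable 2 first, although variable 0 is most relevant;
# the score is below 1.0, penalizing the misordering.
print(ndcg([2, 0, 1, 3], [3.0, 1.0, 2.0, 0.0]))
```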
| Original language | English |
| --- | --- |
| Title of host publication | Lecture Notes in Computer Science |
| Number of pages | 8 |
| Publication status | Published - 2017 |
Fields of science
- 102 Computer Sciences
- 102001 Artificial intelligence
- 102011 Formal languages
- 102022 Software development
- 102031 Theoretical computer science
- 603109 Logic
- 202006 Computer hardware
JKU Focus areas
- Computation in Informatics and Mathematics