Abstract
Contextual information about the listener is only slowly being
integrated into music retrieval and recommendation systems. At the same
time, given the enormous rise in mobile music consumption and the many
sensors integrated into today's smartphones, an unprecedented source of
user context data of various kinds is becoming available. Using a
smartphone application developed to monitor contextual aspects of users
while they listen to music, we collected contextual data on listening
events for 48 users. About 100 different user features, in addition to
music metadata, were recorded.
In this paper, we analyze the relationship between aspects of the user
context and music listening preferences. The goals are to assess (i) whether
user context factors allow predicting the song, artist, mood, or genre of a
listened track, and (ii) which contextual aspects are most promising for
accurate prediction. To this end, we investigate various classifiers to
learn relations between user context aspects and music metadata. We
show that the user context allows predicting artist and genre to some
extent, but can hardly be used for song or mood prediction. Our study
further reveals that the level of listening activity has little influence on
prediction accuracy.
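The prediction task described above is a standard supervised-classification setup: context features of a listening event serve as input, and a metadata label such as genre is the target. The following is a minimal illustrative sketch, not the authors' actual pipeline; it uses a toy 1-nearest-neighbour classifier and invented context features (hour of day, activity level) in place of the roughly 100 real features collected in the study.

```python
# Toy sketch: predicting a genre label from listener-context features.
# The feature set, data, and classifier are invented for illustration only.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_genre(train, context):
    """1-nearest-neighbour: return the genre of the closest training event."""
    return min(train, key=lambda event: euclidean(event[0], context))[1]

# Each training event: ((hour_of_day, activity_level), genre)
listening_events = [
    ((8, 0.9), "electronic"),   # morning commute, high activity
    ((9, 0.8), "electronic"),
    ((22, 0.1), "classical"),   # late evening, low activity
    ((23, 0.2), "classical"),
]

print(predict_genre(listening_events, (7, 0.85)))   # → electronic
print(predict_genre(listening_events, (21, 0.15)))  # → classical
```

In practice a study like this would compare several classifiers and feature subsets; the sketch only shows the shape of the input/output mapping.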
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 21st International Conference on MultiMedia Modeling (MMM 2015) |
| Number of pages | 12 |
| Publication status | Published - 2015 |
Fields of science
- 202002 Audiovisual media
- 102 Computer Sciences
- 102001 Artificial intelligence
- 102003 Image processing
- 102015 Information systems
JKU Focus areas
- Computation in Informatics and Mathematics
- Engineering and Natural Sciences (in general)