Abstract
This paper examines the assumption that, while listening, we continuously tend to focus on the most complex (least repetitive) voice and experience it as the foreground. We present a computational model that calculates the level of attention a voice in a score is likely to require at a given time. The model is based on a measure of musical information complexity. By calculating the complexity of each voice over a short time window, the model predicts the most complex voice to be the most interesting to listen to. The model's capability is evaluated in terms of melody prediction: the predicted notes are compared to melody-annotated scores, with promising results. We discuss how to measure the complexity of pitch and rhythm in music, and examine which factors are the most important in the perception of music.
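The abstract does not specify the complexity measure or the windowing parameters used in the paper. The following is a minimal illustrative sketch of the general idea, assuming a Shannon-entropy stand-in for the information complexity measure and hypothetical window/step sizes; the function names and the combination of pitch-interval and inter-onset-interval entropy are assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a symbol sequence; a simple stand-in
    for the information complexity measure referred to in the abstract."""
    if not symbols:
        return 0.0
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def voice_complexity(notes):
    """Complexity of one voice within a window: entropy of pitch intervals
    plus entropy of inter-onset intervals (a hypothetical combination)."""
    pitches = [p for p, _ in notes]
    onsets = [o for _, o in notes]
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    return shannon_entropy(intervals) + shannon_entropy(iois)

def predict_foreground(voices, window=4.0, step=1.0, end_time=16.0):
    """Slide a window over the score and, at each step, predict the voice
    with the highest complexity as the one attracting attention (melody).
    `voices` maps a voice name to a list of (midi_pitch, onset_time) pairs."""
    predictions = []
    t = 0.0
    while t < end_time:
        scores = {
            name: voice_complexity([n for n in notes if t <= n[1] < t + window])
            for name, notes in voices.items()
        }
        predictions.append((t, max(scores, key=scores.get)))
        t += step
    return predictions

# Toy example: a varied upper voice against a repetitive accompaniment.
voices = {
    "soprano": [(60 + (i * 3) % 12, i * 0.5) for i in range(32)],
    "bass": [(48, i * 1.0) for i in range(16)],
}
print(predict_foreground(voices))
```

In this toy setting the repetitive bass yields near-zero entropy, so the model selects the more varied upper voice as foreground in every window, mirroring the paper's assumption that the least repetitive voice attracts the listener's attention.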
| Original language | English |
| --- | --- |
| Title of host publication | 9th International Conference on Music Perception and Cognition (ICMPC 2006), Bologna, Italy |
| Number of pages | 5 |
| Publication status | Published - 2006 |
Fields of science
- 102 Computer Sciences
- 102001 Artificial intelligence
- 102003 Image processing
- 102015 Information systems
- 202002 Audiovisual media