Abstract
Expert musicians can mould a musical piece to convey specific emotions that they intend to communicate. In this paper, we place a mid-level-feature-based music emotion model in this performer-to-listener communication scenario and demonstrate real-time music emotion decoding via a small visualisation. We also extend the existing set of mid-level features using
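Although the abstract does not detail the model itself, the sketch below illustrates one plausible reading of the setup: perceptual mid-level feature values predicted from short audio frames are mapped to a valence/arousal estimate for real-time display. The feature names (following the commonly used seven-feature set), the linear mapping, the placeholder weights, and the `decode_emotion` function are all assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch: decoding emotion (valence/arousal) from mid-level features.
# Feature names, weights, and the linear decoder are illustrative assumptions.
import numpy as np

MID_LEVEL_FEATURES = [
    "melodiousness", "articulation", "rhythmic_complexity",
    "rhythmic_stability", "dissonance", "tonal_stability", "minorness",
]

# Hypothetical linear decoder: one weight vector per emotion dimension.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, len(MID_LEVEL_FEATURES)))  # placeholder weights
b = np.zeros(2)

def decode_emotion(mid_level: np.ndarray) -> dict:
    """Map a vector of mid-level feature values to (valence, arousal)."""
    valence, arousal = W @ mid_level + b
    return {"valence": float(valence), "arousal": float(arousal)}

# In a real-time demo, a trained model would predict the mid-level features
# from incoming audio frames and feed them to the decoder frame by frame.
frame_features = rng.random(len(MID_LEVEL_FEATURES))
print(decode_emotion(frame_features))
```

In such a setup the visualisation would simply plot the decoded (valence, arousal) point as it updates with each new frame.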
| Original language | English |
|---|---|
| Title of host publication | Late-Breaking/Demo Papers, 23rd International Society for Music Information Retrieval Conference (ISMIR 2022) |
| Number of pages | 5 |
| Publication status | Published - 2022 |
Fields of science
- 202002 Audiovisual media
- 102 Computer Sciences
- 102001 Artificial intelligence
- 102003 Image processing
- 102015 Information systems
JKU Focus areas
- Digital Transformation