Improved Chord Recognition by Combining Duration and Harmonic Language Models

Research output: Chapter in Book/Report/Conference proceeding › Conference proceedings › peer-review

Abstract

Chord recognition systems typically comprise an acoustic model that predicts chords for each audio frame, and a temporal model that casts these predictions into labelled chord segments. However, temporal models have been shown to only smooth predictions, without being able to incorporate musical information about chord progressions. Recent research suggests that it is the low hierarchical level at which such models are applied (directly on audio frames) that prevents them from learning musical relationships, even for expressive models such as recurrent neural networks (RNNs). If applied on the level of chord sequences, however, RNNs can indeed become powerful chord predictors. In this paper, we disentangle temporal models into a harmonic language model—to be applied on chord sequences—and a chord duration model that connects the chord-level predictions of the language model to the frame-level predictions of the acoustic model. In our experiments, we explore the impact of each model on the chord recognition score, and show that using harmonic language and duration models improves the results.
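The decomposition described above can be illustrated with a toy decoding sketch. This is not the paper's actual system: the probabilities, the geometric duration model, and the exhaustive search below are hypothetical stand-ins for the learned acoustic model, RNN-based harmonic language model, and duration model. It only shows how the three log-probability terms combine to score a segmentation of frames into labelled chord segments.

```python
import math

# Hypothetical frame-level acoustic log-probabilities for two chords.
CHORDS = ["C", "G"]
acoustic = [
    {"C": math.log(0.8), "G": math.log(0.2)},
    {"C": math.log(0.7), "G": math.log(0.3)},
    {"C": math.log(0.4), "G": math.log(0.6)},
    {"C": math.log(0.2), "G": math.log(0.8)},
]

def language_logp(prev, cur):
    """Chord-level language model (stand-in for an RNN applied to
    chord sequences): uniform start, fixed change probability."""
    if prev is None:
        return math.log(1.0 / len(CHORDS))
    return math.log(0.7)  # hypothetical probability of this change

def duration_logp(n_frames):
    """Duration model: geometric distribution over segment lengths,
    linking chord-level and frame-level predictions."""
    p = 0.5
    return (n_frames - 1) * math.log(1.0 - p) + math.log(p)

def decode(acoustic):
    """Exhaustive search over all segmentations and labelings
    (feasible only for tiny inputs); the score of a candidate is the
    sum of acoustic, language-model, and duration log-probabilities."""
    n = len(acoustic)
    best_score, best_segs = -math.inf, None

    def rec(start, prev, score, segs):
        nonlocal best_score, best_segs
        if start == n:
            if score > best_score:
                best_score, best_segs = score, segs
            return
        for end in range(start + 1, n + 1):
            for chord in CHORDS:
                if chord == prev:
                    continue  # equal neighbours merge into one segment
                s = score
                s += sum(acoustic[t][chord] for t in range(start, end))
                s += language_logp(prev, chord)
                s += duration_logp(end - start)
                rec(end, chord, s, segs + [(chord, end - start)])

    rec(0, None, 0.0, [])
    return best_segs

segments = decode(acoustic)
print(segments)  # → [('C', 2), ('G', 2)]
```

In the paper, the exhaustive search would be replaced by a tractable decoding procedure, and all three models would be learned rather than fixed; the sketch only mirrors the factorisation of the temporal model into language and duration components.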
Original language: English
Title of host publication: Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR)
Editors: Emilia Gomez, Xiao Hu, Eric Humphrey, Emmanouil Benetos
Pages: 10-17
Number of pages: 8
ISBN (Electronic): 9782954035123
Publication status: Published - 2018

Fields of science

  • 202002 Audiovisual media
  • 102 Computer Sciences
  • 102001 Artificial intelligence
  • 102003 Image processing
  • 102015 Information systems

JKU Focus areas

  • Computation in Informatics and Mathematics
  • Engineering and Natural Sciences (in general)
