Abstract
This work presents a text-to-audio retrieval system based on pre-trained text and spectrogram transformers. Our method projects
recordings and textual descriptions into a shared audio-caption space in which related examples from different modalities are close. Through a systematic analysis, we examine how each component of
the system influences retrieval performance. As a result, we identify two key components that play a crucial role in driving performance: the self-attention-based audio encoder for audio embedding
and the utilization of additional human-generated and synthetic datasets during pre-training. We further experimented with augmenting ClothoV2 captions with available keywords to increase their variety;
however, this only led to marginal improvements. Our system ranked first in the 2023 DCASE Challenge, and it outperforms the current state of the art on the ClothoV2 benchmark by 5.6 pp. mAP@10.
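The shared audio-caption space described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random vectors stand in for the outputs of the pre-trained spectrogram and text transformers, and the projection matrices, dimensions, and variable names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder output dimensions and shared-space size
# (stand-ins for the pre-trained spectrogram/text transformer outputs).
AUDIO_DIM, TEXT_DIM, SHARED_DIM = 768, 512, 256

# Random linear projection heads into the shared audio-caption space.
W_audio = rng.normal(size=(AUDIO_DIM, SHARED_DIM))
W_text = rng.normal(size=(TEXT_DIM, SHARED_DIM))

def project(x, W):
    """Project embeddings into the shared space and L2-normalize them."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy "encoder outputs" for 5 recordings and 5 captions.
audio_emb = rng.normal(size=(5, AUDIO_DIM))
text_emb = rng.normal(size=(5, TEXT_DIM))

a = project(audio_emb, W_audio)   # (5, SHARED_DIM) audio embeddings
t = project(text_emb, W_text)     # (5, SHARED_DIM) caption embeddings

# Text-to-audio retrieval: rank recordings by cosine similarity
# to each caption query (dot product of unit vectors).
similarity = t @ a.T              # (captions, recordings)
ranking = np.argsort(-similarity, axis=1)
top1 = ranking[:, 0]              # best-matching recording per caption
```

In a trained system the projection heads would be optimized with a contrastive objective so that matching audio-caption pairs end up close; here the rankings are meaningless and only the retrieval mechanics are shown.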
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 8th Detection and Classification of Acoustic Scenes and Events 2023 Workshop (DCASE2023) |
| Number of pages | 5 |
| Publication status | Published - 2023 |
Fields of science
- 202002 Audiovisual media
- 102 Computer Sciences
- 102001 Artificial intelligence
- 102003 Image processing
- 102015 Information systems
JKU Focus areas
- Digital Transformation