Abstract
Generative models of music audio typically produce output conditioned only on a text prompt or melody. Boomerang sampling, recently proposed for the image domain, instead allows generating output close to an existing example, using any pretrained diffusion model. In this work, we explore its application in the audio domain as a tool for data augmentation and content manipulation. Specifically, we implement Boomerang sampling for Stable Audio Open, use it to augment training data for a state-of-the-art beat tracker, and attempt to replace musical instruments in recordings. Our results show that the rhythmic structure of existing examples is mostly preserved, that the augmentation improves beat tracker performance, but only when training data is limited, and that Boomerang sampling can accomplish text-based instrument replacement on monophonic inputs. We publish our implementation to invite experiments on data augmentation for other tasks and to explore further applications.
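The core idea of Boomerang sampling is to run the forward diffusion process only part of the way (partially noising an existing example) and then denoise from that intermediate step, so the output stays close to the input. The sketch below illustrates this mechanic on a toy DDPM with deterministic (DDIM-style) reverse steps; the noise schedule, the `boomerang` function, and the oracle noise predictor are all hypothetical stand-ins for illustration, not the paper's Stable Audio Open implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear beta schedule (hypothetical values, for illustration only)
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def boomerang(x0, t_mid, eps_model):
    """Partially noise x0 up to step t_mid, then denoise back down.

    eps_model(x, t) should predict the noise component of x at step t;
    here we use deterministic DDIM-style reverse steps (eta = 0).
    """
    eps = rng.standard_normal(x0.shape)
    # Forward process, applied only up to the intermediate step t_mid
    x = np.sqrt(alpha_bar[t_mid]) * x0 + np.sqrt(1 - alpha_bar[t_mid]) * eps
    # Reverse process from t_mid back to 0
    for t in range(t_mid, 0, -1):
        e = eps_model(x, t)
        x0_hat = (x - np.sqrt(1 - alpha_bar[t]) * e) / np.sqrt(alpha_bar[t])
        x = np.sqrt(alpha_bar[t - 1]) * x0_hat + np.sqrt(1 - alpha_bar[t - 1]) * e
    return x

# Demo with an oracle noise predictor that knows x0 (a real model would
# approximate this from training data, pulling the output toward its prior):
x0 = rng.standard_normal(8)
oracle = lambda x, t: (x - np.sqrt(alpha_bar[t]) * x0) / np.sqrt(1 - alpha_bar[t])
out = boomerang(x0, t_mid=50, eps_model=oracle)
```

Because only part of the forward process is applied, the round trip stays near the original input; with a learned (imperfect) noise predictor, the deviation introduced by denoising is what makes the output a useful augmented variant.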
| Original language | English |
|---|---|
| Title of host publication | 22nd Sound and Music Computing Conference (SMC) |
| Number of pages | 7 |
| DOIs | |
| Publication status | Published - 08 Jul 2025 |
Fields of science
- 102018 Artificial neural networks
- 202037 Signal processing