Abstract
Detecting speech and music is an elementary step in extracting information
from radio broadcasts. Existing solutions either rely on
general-purpose audio features or build on features specifically
engineered for the task. Interpreting spectrograms as images, we
can apply unsupervised feature learning methods from computer
vision instead. In this work, we show that features learned by a
mean-covariance Restricted Boltzmann Machine partly resemble
engineered features, but outperform three hand-crafted feature sets
in speech and music detection on a large corpus of radio recordings.
Our results demonstrate that unsupervised learning is a powerful
alternative to knowledge engineering.
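The pipeline the abstract describes — treating spectrograms as images and learning features from them without supervision — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's method: it substitutes PCA whitening of log-magnitude spectrogram patches for the mean-covariance RBM as the unsupervised learner, and uses a synthetic chirp in place of radio recordings. All function names and parameter values here are hypothetical.

```python
import numpy as np

def log_spectrogram(signal, n_fft=256, hop=128):
    """Short-time Fourier magnitude spectrogram on a log scale."""
    window = np.hanning(n_fft)
    frames = [np.abs(np.fft.rfft(signal[s:s + n_fft] * window))
              for s in range(0, len(signal) - n_fft + 1, hop)]
    return np.log1p(np.array(frames))  # shape: (time, frequency)

def extract_patches(spec, patch=(8, 8), step=4):
    """Interpret the spectrogram as an image and cut it into patches."""
    t, f = spec.shape
    pt, pf = patch
    patches = [spec[i:i + pt, j:j + pf].ravel()
               for i in range(0, t - pt + 1, step)
               for j in range(0, f - pf + 1, step)]
    return np.array(patches)

def learn_pca_features(patches, n_components=16):
    """Unsupervised feature learning: PCA via eigendecomposition of the
    patch covariance (a simple stand-in for the mcRBM in the paper)."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return mean, eigvecs[:, order]

def encode(patches, mean, basis):
    """Project patches onto the learned feature basis."""
    return (patches - mean) @ basis

# Demo on a synthetic signal (a noisy chirp standing in for audio).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)
signal = np.sin(2 * np.pi * (200 + 800 * t) * t) \
    + 0.1 * rng.standard_normal(t.size)

spec = log_spectrogram(signal)
patches = extract_patches(spec)
mean, basis = learn_pca_features(patches)
features = encode(patches, mean, basis)
print(features.shape)  # one 16-dimensional feature vector per patch
```

In the paper's setting, such per-patch features would then feed a classifier that labels frames as speech, music, both, or neither; the PCA step here only illustrates the overall shape of the pipeline.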
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 15th International Conference on Digital Audio Effects (DAFx-12) |
| Number of pages | 8 |
| Publication status | Published - Sept 2012 |
Fields of science
- 102 Computer Sciences
- 102001 Artificial intelligence
- 102003 Image processing
JKU Focus areas
- Computation in Informatics and Mathematics
- Engineering and Natural Sciences (in general)