Efficient Large-Scale Audio Tagging Via Transformer-to-CNN Knowledge Distillation

Research output: Chapter in Book/Report/Conference proceeding › Conference proceedings › peer-review

Abstract

Audio Spectrogram Transformer models dominate the field of Audio Tagging, having outperformed the previously dominant Convolutional Neural Networks (CNNs). Their superiority rests on their ability to scale up and exploit large-scale datasets such as AudioSet. However, Transformers are demanding in terms of model size and computational requirements compared to CNNs. We propose a training procedure for efficient CNNs based on offline Knowledge Distillation (KD) from high-performing yet complex Transformers. The proposed training schema and the efficient CNN design based on MobileNetV3 result in models that outperform previous solutions in terms of parameter and computational efficiency as well as prediction performance. We provide models at different complexity levels, scaling from low-complexity models up to a new state-of-the-art performance of .483 mAP on AudioSet.
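The offline KD objective described in the abstract can be sketched as a weighted mix of a hard-label loss (against the AudioSet annotations) and a distillation loss that matches the Transformer teacher's per-class probabilities. The sketch below is illustrative only: the function name `kd_loss`, the weight `lam`, and its default value are assumptions, not the paper's exact formulation; since AudioSet tagging is multi-label, both terms use a sigmoid-based binary cross-entropy rather than a softmax.

```python
import math

def _bce(logit, target):
    # Numerically stable binary cross-entropy with a logit input:
    # max(x, 0) - x * t + log(1 + exp(-|x|))
    return max(logit, 0) - logit * target + math.log1p(math.exp(-abs(logit)))

def kd_loss(student_logits, teacher_logits, labels, lam=0.1):
    """Offline KD objective (sketch): lam weights the ground-truth labels
    against the frozen teacher's predictions; lam=0.1 is an assumed value."""
    n = len(labels)
    # Hard-label term: multi-label BCE against the dataset annotations
    hard = sum(_bce(s, y) for s, y in zip(student_logits, labels)) / n
    # Distillation term: pull student probabilities toward the teacher's
    soft = sum(_bce(s, 1.0 / (1.0 + math.exp(-t)))
               for s, t in zip(student_logits, teacher_logits)) / n
    return lam * hard + (1 - lam) * soft
```

Because the teacher is used offline, its logits can be precomputed once over the training set and stored, so distillation adds no teacher forward passes during student training.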
Original languageEnglish
Title of host publicationProceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023
Pages1-5
Number of pages5
DOIs
Publication statusPublished - May 2023

Fields of science

  • 202002 Audiovisual media
  • 102 Computer Sciences
  • 102001 Artificial intelligence
  • 102003 Image processing
  • 102015 Information systems

JKU Focus areas

  • Digital Transformation
