Musical instrument recognition in user-generated videos using a multimodal convolutional neural network architecture

dc.contributor.author
Slizovskaia, Olga
dc.contributor.author
Gómez Gutiérrez, Emilia, 1975-
dc.contributor.author
Haro Ortega, Gloria
dc.date.accessioned
2018-12-04T09:28:59Z
dc.date.available
2018-12-04T09:28:59Z
dc.date.issued
2017
dc.identifier
Slizovskaia O, Gómez E, Haro G. Musical instrument recognition in user-generated videos using a multimodal convolutional neural network architecture. In: ICMR 2017. ACM International Conference on Multimedia Retrieval; 2017 Jun 6-9; Bucharest, Romania. New York (NY): ACM; 2017. p. 226-32. DOI: 10.1145/3078971.3079002
dc.identifier
978-1-4503-4701-3
dc.identifier
http://hdl.handle.net/10230/35952
dc.identifier
http://dx.doi.org/10.1145/3078971.3079002
dc.description.abstract
Paper presented at the International Conference on Multimedia Retrieval, held from 6 to 9 June 2017 in Bucharest, Romania.
dc.description.abstract
This paper presents a method for recognizing musical instruments in user-generated videos. Musical instrument recognition from music signals is a well-known task in the music information retrieval (MIR) field, where current approaches rely on the analysis of good-quality audio material. This work addresses a real-world scenario with several research challenges, namely the analysis of user-generated videos that vary in recording conditions and quality and may contain multiple instruments sounding simultaneously as well as background noise. Our approach does not focus solely on audio information; instead, it exploits the multimodal information embedded in both the audio and visual domains. To do so, we develop a Convolutional Neural Network (CNN) architecture that combines learned representations from both modalities at a late fusion stage. Our approach is trained and evaluated on two large-scale video datasets: YouTube-8M and FCVID. The proposed architectures demonstrate state-of-the-art results in audio and video object recognition, provide additional robustness to missing modalities, and remain computationally cheap to train.
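For readers unfamiliar with late fusion, the sketch below illustrates the general pattern the abstract describes: two per-modality CNN branches whose learned embeddings are concatenated before a shared classifier. This is a minimal illustration in PyTorch, not the authors' actual architecture; all layer sizes, input shapes, and the number of classes are assumptions.

```python
# Minimal late-fusion sketch (illustrative only, NOT the paper's architecture).
# Assumed inputs: a 1-channel spectrogram and an RGB video frame per example.
import torch
import torch.nn as nn

class LateFusionCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Audio branch: convolutions over a log-mel spectrogram "image".
        self.audio_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Visual branch: convolutions over an RGB frame.
        self.visual_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Late fusion: concatenate the two embeddings, then classify.
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, spectrogram, frame):
        a = self.audio_branch(spectrogram)  # (batch, 32)
        v = self.visual_branch(frame)       # (batch, 32)
        return self.classifier(torch.cat([a, v], dim=1))

# Dummy usage with assumed input shapes:
model = LateFusionCNN(num_classes=10)
logits = model(torch.randn(2, 1, 96, 64), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```

Because fusion happens only at the embedding level, either branch can still produce a usable representation when the other modality is degraded, which is one common motivation for late fusion over early (input-level) fusion.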
dc.description.abstract
This work is partly supported by the Spanish Ministry of Economy and Competitiveness under the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), the CASAS Spanish research project (TIN2015-70816-R), and project TIN2015-70410-C2-1-R (MINECO/FEDER, UE). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X GPU used for this research.
dc.format
application/pdf
dc.language
eng
dc.publisher
ACM Association for Computing Machinery
dc.relation
ICMR 2017. ACM International Conference on Multimedia Retrieval; 2017 Jun 6-9; Bucharest, Romania. New York (NY): ACM; 2017.
dc.relation
info:eu-repo/grantAgreement/ES/1PE/TIN2015-70816-R
dc.relation
info:eu-repo/grantAgreement/ES/1PE/TIN2015-70410-C2-1-R
dc.rights
© 2017 Association for Computing Machinery
dc.rights
info:eu-repo/semantics/openAccess
dc.subject
Multimodal musical instrument classification
dc.subject
Convolutional neural networks
dc.subject
Multimodal video analysis
dc.subject
Feature fusion
dc.subject
Multimedia information retrieval
dc.title
Musical instrument recognition in user-generated videos using a multimodal convolutional neural network architecture
dc.type
info:eu-repo/semantics/conferenceObject
dc.type
info:eu-repo/semantics/acceptedVersion

