dc.contributor
Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions
dc.contributor
Massachusetts Institute of Technology
dc.contributor
Torralba, Antonio
dc.contributor
Giró Nieto, Xavier
dc.contributor.author
Surís Coll-Vinent, Dídac
dc.date.issued
2018-10-17
dc.identifier
https://hdl.handle.net/2117/127077
dc.identifier
ETSETB-230.134403
dc.description.abstract
To be defined at MIT.
dc.description.abstract
Deep learning models, and more specifically computer vision systems, have achieved great results in recent years. However, the interpretability and understanding of these models are still in their early stages. Interpretability can be approached from a low-level or filter-level perspective, but the representations learned by neural networks encompass much higher-level knowledge that has to be approached from a semantic point of view, with concepts in mind. The goal of this project is to investigate the concepts neural networks learn implicitly when they are trained in an unsupervised scenario, with a special focus on the multimodal matching of words to visual objects and attributes. We study how we can detect these concepts, as well as how we can force the networks to learn more meaningful ones, both providing analytical insights and obtaining practical results.
dc.format
application/pdf
dc.format
application/zip
dc.publisher
Universitat Politècnica de Catalunya
dc.rights
Restricted access - author's decision
dc.subject
Àrees temàtiques de la UPC::Enginyeria de la telecomunicació [UPC subject areas::Telecommunication engineering]
dc.subject
Neural networks (Computer science)
dc.subject
Computer vision
dc.subject
Neural networks
dc.subject
vision and language
dc.subject
convolutional networks
dc.subject
multimodal learning
dc.subject
unsupervised learning
dc.title
How concepts emerge in neural networks