2017-08-25T17:17:10Z
2016
In this paper, we investigate whether a neural network model can learn the meaning of natural language quantifiers (no, some, and all) from their use in visual contexts. We show that memory networks perform well on this task, and that explicit counting is not necessary for the system’s performance, supporting psycholinguistic evidence on the acquisition of quantifiers.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 655577 (LOVe), and from ERC 2011 Starting Independent Research Grant No. 283554 (COMPOSES).
Conference object
Published version
English
Language and vision; Grounding; Quantification; Distributed representations; Semantics; Computational semantics; Computational Linguistics; Natural Language Processing
ACL (Association for Computational Linguistics)
Proceedings of the 5th Workshop on Vision and Language (ACL 2016). Berlin: Association for Computational Linguistics; 2016. p. 75-79
info:eu-repo/grantAgreement/EC/H2020/655577
info:eu-repo/grantAgreement/EC/FP7/283554
© ACL, Creative Commons Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/