To access the full text documents, please follow this link: http://hdl.handle.net/10230/42372

Putting words in context: LSTM language models and lexical ambiguity
Boleda, Gemma; Gulordava, Kristina; Aina, Laura
Paper presented at the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), held from July 28 to August 2, 2019, in Florence, Italy.
In neural network models of language, words are commonly represented using context-invariant representations (word embeddings), which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information.
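The abstract refers to a method for probing the LM's hidden representations. As a rough illustration only, below is a minimal PyTorch sketch of a generic diagnostic-classifier setup of this kind; the toy model, dimensions, and probing target are assumptions made for the example and do not reproduce the paper's actual architecture, data, or probing tasks.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Toy LSTM LM: context-invariant embeddings -> LSTM -> vocabulary logits."""
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embed(token_ids))  # hidden: (batch, seq, hid_dim)
        return self.out(hidden), hidden

vocab_size = 100  # illustrative toy vocabulary size
lm = LSTMLanguageModel(vocab_size)

# Diagnostic classifier ("probe"): a linear layer trained to recover a
# word-level property (here, the identity of the input word, i.e. lexical
# information) from the LM's hidden state at that word's position.
probe = nn.Linear(64, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 8))  # one toy sentence of 8 token ids
with torch.no_grad():                          # LM parameters stay frozen
    _, hidden = lm(tokens)

pos = 4                                        # probe the hidden state at one position
logits = probe(hidden[:, pos])                 # shape: (1, vocab_size)
loss = nn.functional.cross_entropy(logits, tokens[:, pos])
loss.backward()                                # gradients reach only the probe
```

Freezing the LM and training only the linear probe is what allows probe accuracy to be read as a measure of how much of the target information is encoded in the hidden states; an analogous probe targeting a contextually disambiguated label rather than word identity would test for contextual instead of lexical information.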
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154), and from the Ramón y Cajal programme (grant RYC-2015-18907).
-Language models
-Lexical ambiguity
-Neural networks
© ACL, Creative Commons Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Conference Object
Article - Published version
ACL (Association for Computational Linguistics)

Other documents of the same author

Boleda, Gemma; Aina, Laura; Silberer, Carina; Sorodoc, Ionut-Teodor; Westera, Matthijs
Aina, Laura; Bernardi, Raffaella; Fernández, Raquel
Boleda, Gemma; Gupta, Abhijeet; Padó, Sebastian