Title:
|
How to represent a word and predict it, too: improving tied architectures for language modelling
|
Author:
|
Gulordava, Kristina; Aina, Laura; Boleda, Gemma
|
Note:
|
Paper presented at EMNLP 2018, the Conference on Empirical Methods in Natural Language Processing, held in Brussels (Belgium), October 31 – November 4, 2018. |
Abstract:
|
Recent state-of-the-art neural language models share the word representations used by the input and output mappings. We propose a simple modification to these architectures that decouples the hidden state from the word embedding prediction. Our architecture yields results comparable to or better than those of previous tied models and of models without tying, with far fewer parameters. We also extend our proposal to word2vec models, showing that tying is appropriate for general word prediction tasks. |
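The tying scheme the abstract describes can be illustrated with a minimal numpy sketch: a single embedding matrix `E` serves both as the input lookup table and as the output scoring matrix, and a separate learned projection `W_p` decouples the recurrent hidden state from the embedding space in which prediction happens. All names and dimensions here are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: vocabulary, embedding dim, hidden dim
V, d, h = 10, 4, 6

E = rng.standard_normal((V, d))    # shared (tied) input/output embedding matrix
W_p = rng.standard_normal((h, d))  # projection decoupling hidden state from prediction

def predict_logits(hidden):
    # Project the hidden state into embedding space, then
    # score it against the same tied embedding matrix E.
    c = hidden @ W_p               # shape (d,)
    return c @ E.T                 # shape (V,): one logit per word

x = 3                              # an input word id
e_x = E[x]                         # input lookup uses the same matrix E
hidden = rng.standard_normal(h)    # stand-in for an RNN/LSTM hidden state
logits = predict_logits(hidden)
probs = np.exp(logits - logits.max())
probs /= probs.sum()               # softmax over the vocabulary
```

Without the projection `W_p`, tying forces the hidden size to equal the embedding size; the extra projection removes that constraint while keeping a single word-representation matrix, which is where the parameter savings come from.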
Funding:
|
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154), from the Ramón y Cajal programme (grant RYC-2015-18907), and from the Catalan government (SGR 2017 1575). |
Subject(s):
|
Language models; Word embeddings; Neural networks; Tied representations |
Rights:
|
© ACL, Creative Commons Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/ |
Document type:
|
Conference object; article (published version) |
Published by:
|
ACL (Association for Computational Linguistics)
|