LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and Beyond
Daniel Loureiro, Alípio Jorge, Jose Camacho-Collados
arXiv:2105.12449 · 26 May 2021
Papers citing "LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and Beyond" (21 of 21 papers shown)

| Title | Authors | Tags | Citations | Date |
| --- | --- | --- | --- | --- |
| Linguistic Interpretability of Transformer-based Language Models: a systematic review | Miguel López-Otal, Jorge Gracia, Jordi Bernad, Carlos Bobed, Lucía Pitarch-Ballesteros, Emma Anglés-Herrero | VLM | 0 | 09 Apr 2025 |
| Let's Play Mono-Poly: BERT Can Reveal Words' Polysemy Level and Partitionability into Senses | Aina Garí Soler, Marianna Apidianaki | MILM | 68 | 29 Apr 2021 |
| Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand? | William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith | — | 68 | 22 Apr 2021 |
| Probing Pretrained Language Models for Lexical Semantics | Ivan Vulić, Edoardo Ponti, Robert Litschko, Goran Glavaš, Anna Korhonen | KELM | 238 | 12 Oct 2020 |
| ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut | SSL, AIMat | 6,420 | 26 Sep 2019 |
| Story Realization: Expanding Plot Events into Sentences | Prithviraj Ammanabrolu, Ethan Tien, Wesley Cheung, Zhaochen Luo, William Ma, Lara J. Martin, Mark O. Riedl | NAI | 69 | 08 Sep 2019 |
| The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives | Elena Voita, Rico Sennrich, Ivan Titov | — | 186 | 03 Sep 2019 |
| GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge | Luyao Huang, Chi Sun, Xipeng Qiu, Xuanjing Huang | — | 241 | 20 Aug 2019 |
| Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation | Daniel Loureiro, Alípio Jorge | — | 138 | 24 Jun 2019 |
| Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned | Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, Ivan Titov | — | 1,120 | 23 May 2019 |
| What do you learn from context? Probing for sentence structure in contextualized word representations | Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, ..., Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick | — | 853 | 15 May 2019 |
| BERT Rediscovers the Classical NLP Pipeline | Ian Tenney, Dipanjan Das, Ellie Pavlick | MILM, SSeg | 1,458 | 15 May 2019 |
| SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems | Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman | ELM | 2,296 | 02 May 2019 |
| Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context | Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov | VLM | 3,714 | 09 Jan 2019 |
| WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations | Mohammad Taher Pilehvar, Jose Camacho-Collados | — | 478 | 28 Aug 2018 |
| Dissecting Contextual Word Embeddings: Architecture and Representation | Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, Wen-tau Yih | — | 429 | 27 Aug 2018 |
| Deep contextualized word representations | Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer | NAI | 11,520 | 15 Feb 2018 |
| Embedding Words and Senses Together via Joint Knowledge-Enhanced Training | Massimiliano Mancini, Jose Camacho-Collados, Ignacio Iacobacci, Roberto Navigli | — | 78 | 08 Dec 2016 |
| Enriching Word Vectors with Subword Information | Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov | NAI, SSL, VLM | 9,944 | 15 Jul 2016 |
| Do Multi-Sense Embeddings Improve Natural Language Understanding? | Jiwei Li, Dan Jurafsky | — | 234 | 02 Jun 2015 |
| Distributed Representations of Words and Phrases and their Compositionality | Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean | NAI, OCL | 33,445 | 16 Oct 2013 |