Cited By
Unsupervised Lexical Substitution with Decontextualised Embeddings
Takashi Wada, Timothy Baldwin, Yuji Matsumoto, Jey Han Lau
arXiv:2209.08236, 17 September 2022
Papers citing "Unsupervised Lexical Substitution with Decontextualised Embeddings" (18 of 18 papers shown):
1. Always Keep your Target in Mind: Studying Semantics and Improving Performance of Neural Lexical Substitution. N. Arefyev, Boris Sheludko, Alexander Podolskiy, Alexander Panchenko. 07 Jun 2022.
2. Tracing Text Provenance via Context-Aware Lexical Substitution. Xi Yang, Jie Zhang, Kejiang Chen, Weiming Zhang, Zehua Ma, Feng Wang, Nenghai Yu. 15 Dec 2021.
3. DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing. Pengcheng He, Jianfeng Gao, Weizhu Chen. 18 Nov 2021.
4. LexSubCon: Integrating Knowledge from Lexical Resources into Contextual Embeddings for Lexical Substitution. George Michalopoulos, I. McKillop, Alexander Wong, Helen H. Chen. 11 Jul 2021.
5. Swords: A Benchmark for Lexical Substitution with Improved Data Coverage and Quality. Mina Lee, Chris Donahue, Robin Jia, Alexander Iyabor, Percy Liang. 08 Jun 2021.
6. SimCSE: Simple Contrastive Learning of Sentence Embeddings. Tianyu Gao, Xingcheng Yao, Danqi Chen. 18 Apr 2021.
7. A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages. Pedro Ortiz Suarez, Laurent Romary, Benoît Sagot. 11 Jun 2020.
8. Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020.
9. MPNet: Masked and Permuted Pre-training for Language Understanding. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. 20 Apr 2020.
10. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel-rahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer. 29 Oct 2019.
11. Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings. Gregor Wiedemann, Steffen Remus, Avi Chawla, Chris Biemann. 23 Sep 2019.
12. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. Nils Reimers, Iryna Gurevych. 27 Aug 2019.
13. RoBERTa: A Robustly Optimized BERT Pretraining Approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov. 26 Jul 2019.
14. SpanBERT: Improving Pre-training by Representing and Predicting Spans. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy. 24 Jul 2019.
15. XLNet: Generalized Autoregressive Pretraining for Language Understanding. Zhilin Yang, Zihang Dai, Yiming Yang, J. Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 19 Jun 2019.
16. BERT Rediscovers the Classical NLP Pipeline. Ian Tenney, Dipanjan Das, Ellie Pavlick. 15 May 2019.
17. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 11 Oct 2018.
18. Attention Is All You Need. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin. 12 Jun 2017.