Assessing BERT's Syntactic Abilities
Yoav Goldberg
arXiv:1901.05287, 16 January 2019
Papers citing "Assessing BERT's Syntactic Abilities" (38 of 138 shown)
1. On the Robustness of Language Encoders against Grammatical Errors. Fan Yin, Quanyu Long, Tao Meng, Kai-Wei Chang. 12 May 2020.
2. A Systematic Assessment of Syntactic Generalization in Neural Language Models. Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Gotlieb Wilcox, R. Levy. 07 May 2020.
3. Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words. Josef Klafka, Allyson Ettinger. 04 May 2020.
4. The Sensitivity of Language Models and Humans to Winograd Schema Perturbations. Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, Anders Søgaard. 04 May 2020.
5. When BERT Plays the Lottery, All Tickets Are Winning. Sai Prasanna, Anna Rogers, Anna Rumshisky. 01 May 2020.
6. Cross-Linguistic Syntactic Evaluation of Word Prediction Models. Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, N. Talmina, Tal Linzen. 01 May 2020.
7. Quantifying the Contextualization of Word Representations with Semantic Class Probing. Mengjie Zhao, Philipp Dufter, Yadollah Yaghoobzadeh, Hinrich Schütze. 25 Apr 2020.
8. Syntactic Data Augmentation Increases Robustness to Inference Heuristics. Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen. 24 Apr 2020.
9. Syntactic Structure from Deep Learning. Tal Linzen, Marco Baroni. 22 Apr 2020.
10. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg. 16 Apr 2020.
11. Unsupervised Domain Clusters in Pretrained Language Models. Roee Aharoni, Yoav Goldberg. 05 Apr 2020.
12. Pre-trained Models for Natural Language Processing: A Survey. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang. 18 Mar 2020.
13. A Primer in BERTology: What we know about how BERT works. Anna Rogers, Olga Kovaleva, Anna Rumshisky. 27 Feb 2020.
14. Parsing as Pretraining. David Vilares, Michalina Strzyz, Anders Søgaard, Carlos Gómez-Rodríguez. 05 Feb 2020.
15. Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction. Taeuk Kim, Jihun Choi, Daniel Edmiston, Sang-goo Lee. 30 Jan 2020.
16. oLMpics -- On what Language Model Pre-training Captures. Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant. 31 Dec 2019.
17. Adversarial Analysis of Natural Language Inference Systems. Tiffany Chien, Jugal Kalita. 07 Dec 2019.
18. Neural Machine Translation: A Review and Survey. Felix Stahlberg. 04 Dec 2019.
19. Inducing Relational Knowledge from BERT. Zied Bouraoui, Jose Camacho-Collados, Steven Schockaert. 28 Nov 2019.
20. Question Generation from Paragraphs: A Tale of Two Hierarchical Models. Vishwajeet Kumar, Raktim Chaki, Sai Teja Talluri, Kristine Newman, Yuan-Fang Li, Gholamreza Haffari. 08 Nov 2019.
21. On the Cross-lingual Transferability of Monolingual Representations. Mikel Artetxe, Sebastian Ruder, Dani Yogatama. 25 Oct 2019.
22. Universal Text Representation from BERT: An Empirical Study. Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh Nallapati, Bing Xiang. 17 Oct 2019.
23. Biomedical relation extraction with pre-trained language representations and minimal task-specific architecture. Ashok Thillaisundaram, Theodosia Togia. 26 Sep 2019.
24. Say Anything: Automatic Semantic Infelicity Detection in L2 English Indefinite Pronouns. Ella Rabinovich, Julia Watson, Barend Beekhuizen, S. Stevenson. 17 Sep 2019.
25. Probing Natural Language Inference Models through Semantic Fragments. Kyle Richardson, Hai Hu, L. Moss, Ashish Sabharwal. 16 Sep 2019.
26. Language Models as Knowledge Bases? Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel. 03 Sep 2019.
27. Compositionality decomposed: how do neural networks generalise? Dieuwke Hupkes, Verna Dankers, Mathijs Mul, Elia Bruni. 22 Aug 2019.
28. Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing -- A Tale of Two Parsers Revisited. Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, Joakim Nivre. 20 Aug 2019.
29. Visualizing and Understanding the Effectiveness of BERT. Y. Hao, Li Dong, Furu Wei, Ke Xu. 15 Aug 2019.
30. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Allyson Ettinger. 31 Jul 2019.
31. Scalable Syntax-Aware Language Models Using Knowledge Distillation. A. Kuncoro, Chris Dyer, Laura Rimell, S. Clark, Phil Blunsom. 14 Jun 2019.
32. What Does BERT Look At? An Analysis of BERT's Attention. Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning. 11 Jun 2019.
33. A Survey of Reinforcement Learning Informed by Natural Language. Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob N. Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, Tim Rocktaschel. 10 Jun 2019.
34. Analyzing the Structure of Attention in a Transformer Language Model. Jesse Vig, Yonatan Belinkov. 07 Jun 2019.
35. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). Mariya Toneva, Leila Wehbe. 28 May 2019.
36. 75 Languages, 1 Model: Parsing Universal Dependencies Universally. Dan Kondratyuk, Milan Straka. 03 Apr 2019.
37. Distilling Task-Specific Knowledge from BERT into Simple Neural Networks. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy J. Lin. 28 Mar 2019.
38. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. R. Thomas McCoy, Ellie Pavlick, Tal Linzen. 04 Feb 2019.