Assessing BERT's Syntactic Abilities
Yoav Goldberg
arXiv:1901.05287, 16 January 2019
Papers citing "Assessing BERT's Syntactic Abilities" (50 of 138 papers shown)
Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans. Yair Lakretz, T. Desbordes, Dieuwke Hupkes, S. Dehaene. 14 Oct 2021.
Co-training an Unsupervised Constituency Parser with Weak Supervision. Nickil Maveli, Shay B. Cohen. 05 Oct 2021.
Analysing the Effect of Masking Length Distribution of MLM: An Evaluation Framework and Case Study on Chinese MRC Datasets. Changchang Zeng, Shaobo Li. 29 Sep 2021.
Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations. Ekaterina Taktasheva, Vladislav Mikhailov, Ekaterina Artemova. 28 Sep 2021.
Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus. Daniela Trotta, R. Guarasci, Elisa Leonardelli, Sara Tonelli. 24 Sep 2021.
Are Transformers a Modern Version of ELIZA? Observations on French Object Verb Agreement. Bingzhi Li, Guillaume Wisniewski, Benoît Crabbé. 21 Sep 2021.
A Relation-Oriented Clustering Method for Open Relation Extraction. Jun Zhao, Tao Gui, Qi Zhang, Yaqian Zhou. 15 Sep 2021.
Frequency Effects on Syntactic Rule Learning in Transformers. Jason W. Wei, Dan Garrette, Tal Linzen, Ellie Pavlick. 14 Sep 2021.
Transformers in the loop: Polarity in neural models of language. Lisa Bylinina, Alexey Tikhonov. 08 Sep 2021.
How Does Adversarial Fine-Tuning Benefit BERT? J. Ebrahimi, Hao Yang, Wei Zhang. 31 Aug 2021.
Differentiable Subset Pruning of Transformer Heads. Jiaoda Li, Ryan Cotterell, Mrinmaya Sachan. 10 Aug 2021.
Improving Similar Language Translation With Transfer Learning. Ife Adebara, Muhammad Abdul-Mageed. 07 Aug 2021.
Local Structure Matters Most: Perturbation Study in NLU. Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar. 29 Jul 2021.
QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension. Anna Rogers, Matt Gardner, Isabelle Augenstein. 27 Jul 2021.
Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge. Paolo Pedinotti, Giulia Rambelli, Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, P. Blache. 22 Jul 2021.
A Survey on Data Augmentation for Text Classification. Markus Bayer, M. Kaufhold, Christian A. Reuter. 07 Jul 2021.
On the proper role of linguistically-oriented deep net analysis in linguistic theorizing. Marco Baroni. 16 Jun 2021.
Pre-Trained Models: Past, Present and Future. Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu. 14 Jun 2021.
Can Transformer Language Models Predict Psychometric Properties? Antonio Laverghetta, Animesh Nighojkar, Jamshidbek Mirzakhalov, John Licato. 12 Jun 2021.
BERTnesia: Investigating the capture and forgetting of knowledge in BERT. Jonas Wallat, Jaspreet Singh, Avishek Anand. 05 Jun 2021.
Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing. Rowan Hall Maudslay, Ryan Cotterell. 04 Jun 2021.
The Limitations of Limited Context for Constituency Parsing. Yuchen Li, Andrej Risteski. 03 Jun 2021.
Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction. Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg. 14 May 2021.
Refining Targeted Syntactic Evaluation of Language Models. Benjamin Newman, Kai-Siang Ang, Julia Gong, John Hewitt. 19 Apr 2021.
Probing for Bridging Inference in Transformer Language Models. Onkar Pandit, Yufang Hou. 19 Apr 2021.
Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema. Yanai Elazar, Hongming Zhang, Yoav Goldberg, Dan Roth. 16 Apr 2021.
Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models. Matteo Alleman, J. Mamou, Miguel Rio, Hanlin Tang, Yoon Kim, SueYeon Chung. 15 Apr 2021.
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little. Koustuv Sinha, Robin Jia, Dieuwke Hupkes, J. Pineau, Adina Williams, Douwe Kiela. 14 Apr 2021.
Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do. P. Schramowski, Cigdem Turan, Nico Andersen, Constantin Rothkopf, Kristian Kersting. 08 Mar 2021.
Measuring and Improving Consistency in Pretrained Language Models. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg. 01 Feb 2021.
Does injecting linguistic structure into language models lead to better alignment with brain recordings? Mostafa Abdou, Ana Valeria González, Mariya Toneva, Daniel Hershcovich, Anders Søgaard. 29 Jan 2021.
The heads hypothesis: A unifying statistical approach towards understanding multi-headed attention in BERT. Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra. 22 Jan 2021.
Can RNNs learn Recursive Nested Subject-Verb Agreements? Yair Lakretz, T. Desbordes, J. King, Benoît Crabbé, Maxime Oquab, S. Dehaene. 06 Jan 2021.
Causal BERT: Language models for causality detection between events expressed in text. Vivek Khetan, Roshni Ramnani, M. Anand, Shubhashis Sengupta, Andrew E. Fano. 10 Dec 2020.
Infusing Finetuning with Semantic Dependencies. Zhaofeng Wu, Hao Peng, Noah A. Smith. 10 Dec 2020.
The Devil is in the Details: Evaluating Limitations of Transformer-based Methods for Granular Tasks. Brihi Joshi, Neil Shah, Francesco Barbieri, Leonardo Neves. 02 Nov 2020.
Dynamic Contextualized Word Embeddings. Valentin Hofmann, J. Pierrehumbert, Hinrich Schütze. 23 Oct 2020.
Cold-start Active Learning through Self-supervised Language Modeling. Michelle Yuan, Hsuan-Tien Lin, Jordan L. Boyd-Graber. 19 Oct 2020.
Does Chinese BERT Encode Word Structure? Yile Wang, Leyang Cui, Yue Zhang. 15 Oct 2020.
Linguistic Profiling of a Neural Language Model. Alessio Miaschi, D. Brunato, F. Dell’Orletta, Giulia Venturi. 05 Oct 2020.
My Body is a Cage: the Role of Morphology in Graph-Based Incompatible Control. Vitaly Kurin, Maximilian Igl, Tim Rocktäschel, Wendelin Boehmer, Shimon Whiteson. 05 Oct 2020.
Improving AMR Parsing with Sequence-to-Sequence Pre-training. Dong Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou. 05 Oct 2020.
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking. Michael Schlichtkrull, Nicola De Cao, Ivan Titov. 01 Oct 2020.
Analysis and Evaluation of Language Models for Word Sense Disambiguation. Daniel Loureiro, Kiamehr Rezaee, Mohammad Taher Pilehvar, Jose Camacho-Collados. 26 Aug 2020.
BERTology Meets Biology: Interpreting Attention in Protein Language Models. Jesse Vig, Ali Madani, Lav Varshney, Caiming Xiong, R. Socher, Nazneen Rajani. 26 Jun 2020.
A Cross-Task Analysis of Text Span Representations. Shubham Toshniwal, Freda Shi, Bowen Shi, Lingyu Gao, Karen Livescu, Kevin Gimpel. 06 Jun 2020.
Syntactic Structure Distillation Pretraining For Bidirectional Encoders. A. Kuncoro, Lingpeng Kong, Daniel Fried, Dani Yogatama, Laura Rimell, Chris Dyer, Phil Blunsom. 27 May 2020.
Query Resolution for Conversational Search with Limited Supervision. Nikos Voskarides, Dan Li, Pengjie Ren, Evangelos Kanoulas, Maarten de Rijke. 24 May 2020.
A Generative Approach to Titling and Clustering Wikipedia Sections. Anjalie Field, S. Rothe, Simon Baumgartner, Cong Yu, Abe Ittycheriah. 22 May 2020.
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data. Pengcheng Yin, Graham Neubig, Wen-tau Yih, Sebastian Riedel. 17 May 2020.