From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions
David Mareček, Rudolf Rosa
5 June 2019 · arXiv:1906.01958
Papers citing "From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions" (10 papers shown):
Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment
Rohan Pandey, Rulin Shao, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency · 20 Dec 2022

Emergent Linguistic Structures in Neural Networks are Fragile
Emanuele La Malfa, Matthew Wicker, Marta Kwiatkowska · 31 Oct 2022

Revisiting the Practical Effectiveness of Constituency Parse Extraction from Pre-trained Language Models
Taeuk Kim · 15 Sep 2022

What does Transformer learn about source code?
Kechi Zhang, Ge Li, Zhi Jin · 18 Jul 2022

Incorporating Residual and Normalization Layers into Analysis of Masked Language Models
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui · 15 Sep 2021

Predicting Discourse Trees from Transformer-based Neural Summarizers
Wen Xiao, Patrick Huber, Giuseppe Carenini · 14 Apr 2021

Probing Classifiers: Promises, Shortcomings, and Advances
Yonatan Belinkov · 24 Feb 2021

Gender Bias in Multilingual Neural Machine Translation: The Architecture Matters
Marta R. Costa-jussà, Carlos Escolano, Christine Basta, Javier Ferrando, Roser Batlle-Roca, Ksenia Kharitonova · 24 Dec 2020

Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation
Alessandro Raganato, Yves Scherrer, Jörg Tiedemann · 24 Feb 2020

Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction
Taeuk Kim, Jihun Choi, Daniel Edmiston, Sang-goo Lee · 30 Jan 2020