Do Attention Heads in BERT Track Syntactic Dependencies?
Phu Mon Htut, Jason Phang, Shikha Bordia, Samuel R. Bowman (27 November 2019)
arXiv:1911.12246
Papers citing "Do Attention Heads in BERT Track Syntactic Dependencies?"

33 papers shown

Linguistically Grounded Analysis of Language Models using Shapley Head Values
Marcell Richard Fekete, Johannes Bjerva (17 Oct 2024)

On the Role of Attention Heads in Large Language Model Safety
Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Kun Wang, Yang Liu, Fan Zhang, Yongbin Li (17 Oct 2024)

Racing Thoughts: Explaining Contextualization Errors in Large Language Models
Michael A. Lepori, Michael Mozer, Asma Ghandeharioun (02 Oct 2024)

Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models
Chengzhengxu Li, Xiaoming Liu, Zhaohan Zhang, Yichen Wang, Chen Liu, Y. Lan, Chao Shen (15 Jun 2024)

Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings
Yichen Jiang, Xiang Zhou, Mohit Bansal (09 Feb 2024)

Disentangling the Linguistic Competence of Privacy-Preserving BERT
Stefan Arnold, Nils Kemmerzell, Annika Schreiner (17 Oct 2023)

Morphosyntactic probing of multilingual BERT models
Judit Ács, Endre Hamerlik, Roy Schwartz, Noah A. Smith, András Kornai (09 Jun 2023)

Syntactic Substitutability as Unsupervised Dependency Syntax
Jasper Jian, Siva Reddy (29 Nov 2022)

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers
Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz (07 Nov 2022)

Data-Efficient Cross-Lingual Transfer with Language-Specific Subnetworks
Rochelle Choenni, Dan Garrette, Ekaterina Shutova (31 Oct 2022)

Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models
Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma (25 Oct 2022)

Does Attention Mechanism Possess the Feature of Human Reading? A Perspective of Sentiment Classification Task
Leilei Zhao, Yingyi Zhang, Chengzhi Zhang (08 Sep 2022)

What does Transformer learn about source code?
Kechi Zhang, Ge Li, Zhi Jin (18 Jul 2022)

Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach
Chao Zhao, Faeze Brahman, Tenghao Huang, Snigdha Chaturvedi (26 May 2022)

Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps
Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein (23 Apr 2022)

Probing Script Knowledge from Pre-Trained Models
Zijian Jin, Xingyu Zhang, Mo Yu, Lifu Huang (16 Apr 2022)

What Do They Capture? -- A Structural Analysis of Pre-Trained Language Models for Source Code
Yao Wan, Wei-Ye Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, Hairong Jin (14 Feb 2022)

Interpreting Deep Learning Models in Natural Language Processing: A Review
Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li (20 Oct 2021)

Shaking Syntactic Trees on the Sesame Street: Multilingual Probing with Controllable Perturbations
Ekaterina Taktasheva, Vladislav Mikhailov, Ekaterina Artemova (28 Sep 2021)

Incorporating Residual and Normalization Layers into Analysis of Masked Language Models
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui (15 Sep 2021)

Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience
G. Chrysostomou, Nikolaos Aletras (31 Aug 2021)

Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu (14 Jun 2021)

The Limitations of Limited Context for Constituency Parsing
Yuchen Li, Andrej Risteski (03 Jun 2021)

Probing for Bridging Inference in Transformer Language Models
Onkar Pandit, Yufang Hou (19 Apr 2021)

Enhancing Word-Level Semantic Representation via Dependency Structure for Expressive Text-to-Speech Synthesis
Yixuan Zhou, Changhe Song, Jingbei Li, Zhiyong Wu, Yanyao Bian, Dan Su, Helen Meng (14 Apr 2021)

Probing Classifiers: Promises, Shortcomings, and Advances
Yonatan Belinkov (24 Feb 2021)

Gender Bias in Multilingual Neural Machine Translation: The Architecture Matters
Marta R. Costa-jussà, Carlos Escolano, Christine Basta, Javier Ferrando, Roser Batlle-Roca, Ksenia Kharitonova (24 Dec 2020)

Self-Explaining Structures Improve NLP Models
Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li (03 Dec 2020)

Encoding Syntactic Constituency Paths for Frame-Semantic Parsing with Graph Convolutional Networks
E. Bastianelli, Andrea Vanzo, Oliver Lemon (26 Nov 2020)

BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig, Ali Madani, L. Varshney, Caiming Xiong, R. Socher, Nazneen Rajani (26 Jun 2020)

Molecule Attention Transformer
Lukasz Maziarka, Tomasz Danel, Slawomir Mucha, Krzysztof Rataj, Jacek Tabor, Stanislaw Jastrzebski (19 Feb 2020)

What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni (03 May 2018)

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman (20 Apr 2018)