BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig, Ali Madani, Lav Varshney, Caiming Xiong, R. Socher, Nazneen Rajani
26 June 2020. arXiv:2006.15222 (v3, latest). Available as arXiv abstract, PDF, and HTML; code on GitHub (302★).

Papers citing "BERTology Meets Biology: Interpreting Attention in Protein Language Models" (showing 50 of 62):
- ProteinGPT: Multimodal LLM for Protein Property Prediction and Structure Understanding. Yijia Xiao, Edward Sun, Yiqiao Jin, Qifan Wang, Wei Wang. 21 Aug 2024.
- ProtChatGPT: Towards Understanding Proteins with Large Language Models. Chao Wang, Hehe Fan, Ruijie Quan, Yi Yang. 15 Feb 2024.
- Comparative Performance Evaluation of Large Language Models for Extracting Molecular Interactions and Pathway Knowledge. Gilchan Park, Byung-Jun Yoon, Xihaier Luo, Vanessa López-Marrero, Shinjae Yoo, Francis J. Alexander. 17 Jul 2023.
- Multi-Stage Influence Function. Hongge Chen, Si Si, Yongqian Li, Ciprian Chelba, Sanjiv Kumar, Duane S. Boning, Cho-Jui Hsieh. 17 Jul 2020.
- ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing. Ahmed Elnaggar, M. Heinzinger, Christian Dallago, Ghalia Rehawi, Yu Wang, ..., Tamas B. Fehér, Christoph Angerer, Martin Steinegger, D. Bhowmik, B. Rost. 13 Jul 2020.
- BERT Learns (and Teaches) Chemistry. Josh Payne, Mario Srouji, Dian Ang Yap, V. Kosaraju. 11 Jul 2020.
- WT5?! Training Text-to-Text Models to Explain their Predictions. Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, Karishma Malkan. 30 Apr 2020.
- Calibration of Pre-trained Transformers. Shrey Desai, Greg Durrett. 17 Mar 2020.
- ProGen: Language Modeling for Protein Generation. Ali Madani, Bryan McCann, Nikhil Naik, N. Keskar, N. Anand, Raphael R. Eguchi, Po-Ssu Huang, R. Socher. 08 Mar 2020.
- A Primer in BERTology: What we know about how BERT works. Anna Rogers, Olga Kovaleva, Anna Rumshisky. 27 Feb 2020.
- Do Attention Heads in BERT Track Syntactic Dependencies? Phu Mon Htut, Jason Phang, Shikha Bordia, Samuel R. Bowman. 27 Nov 2019.
- What do you mean, BERT? Assessing BERT as a Distributional Semantics Model. Timothee Mickus, Denis Paperno, Mathieu Constant, Kees van Deemter. 13 Nov 2019.
- Assessing Social and Intersectional Biases in Contextualized Word Representations. Y. Tan, Elisa Celis. 04 Nov 2019.
- A Game Theoretic Approach to Class-wise Selective Rationalization. Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola. 28 Oct 2019.
- exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models. Benjamin Hoover, Hendrik Strobelt, Sebastian Gehrmann. 11 Oct 2019.
- Interrogating the Explanatory Power of Attention in Neural Machine Translation. Pooya Moradi, Nishant Kambhatla, Anoop Sarkar. 30 Sep 2019.
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 26 Sep 2019.
- Attention Interpretability Across NLP Tasks. Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, Manaal Faruqui. 24 Sep 2019.
- Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings. Gregor Wiedemann, Steffen Remus, Avi Chawla, Chris Biemann. 23 Sep 2019.
- Learning to Deceive with Attention-Based Explanations. Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, Zachary Chase Lipton. 17 Sep 2019.
- Pretrained AI Models: Performativity, Mobility, and Change. Lav Varshney, N. Keskar, R. Socher. 07 Sep 2019.
- How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings. Kawin Ethayarajh. 02 Sep 2019.
- Adaptively Sparse Transformers. Gonçalo M. Correia, Vlad Niculae, André F. T. Martins. 30 Aug 2019.
- Revealing the Dark Secrets of BERT. Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky. 21 Aug 2019.
- Fine-grained Sentiment Analysis with Faithful Attention. Ruiqi Zhong, Steven Shao, Kathleen McKeown. 19 Aug 2019.
- Attention is not not Explanation. Sarah Wiegreffe, Yuval Pinter. 13 Aug 2019.
- On Identifiability in Transformers. Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, Roger Wattenhofer. 12 Aug 2019.
- What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Allyson Ettinger. 31 Jul 2019.
- Probing Neural Network Comprehension of Natural Language Arguments. Timothy Niven, Hung-Yu Kao. 17 Jul 2019.
- XLNet: Generalized Autoregressive Pretraining for Language Understanding. Zhilin Yang, Zihang Dai, Yiming Yang, J. Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 19 Jun 2019.
- Evaluating Protein Transfer Learning with TAPE. Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John F. Canny, Pieter Abbeel, Yun S. Song. 19 Jun 2019.
- Measuring Bias in Contextualized Word Representations. Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov. 18 Jun 2019.
- A Multiscale Visualization of Attention in the Transformer Model. Jesse Vig. 12 Jun 2019.
- What Does BERT Look At? An Analysis of BERT's Attention. Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning. 11 Jun 2019.
- Is Attention Interpretable? Sofia Serrano, Noah A. Smith. 09 Jun 2019.
- Analyzing the Structure of Attention in a Transformer Language Model. Jesse Vig, Yonatan Belinkov. 07 Jun 2019.
- Visualizing and Measuring the Geometry of BERT. Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, F. Viégas, Martin Wattenberg. 06 Jun 2019.
- Explain Yourself! Leveraging Language Models for Commonsense Reasoning. Nazneen Rajani, Bryan McCann, Caiming Xiong, R. Socher. 06 Jun 2019.
- Open Sesame: Getting Inside BERT's Linguistic Knowledge. Yongjie Lin, Y. Tan, Robert Frank. 04 Jun 2019.
- Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned. Elena Voita, David Talbot, F. Moiseev, Rico Sennrich, Ivan Titov. 23 May 2019.
- Interpretable Neural Predictions with Differentiable Binary Variables. Jasmijn Bastings, Wilker Aziz, Ivan Titov. 20 May 2019.
- BERT Rediscovers the Classical NLP Pipeline. Ian Tenney, Dipanjan Das, Ellie Pavlick. 15 May 2019.
- Linguistic Knowledge and Transferability of Contextual Representations. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith. 21 Mar 2019.
- Attention is not Explanation. Sarthak Jain, Byron C. Wallace. 26 Feb 2019.
- Learning protein sequence embeddings using information from structure. Tristan Bepler, Bonnie Berger. 22 Feb 2019.
- ProteinNet: a standardized data set for machine learning of protein structure. Mohammed AlQuraishi. 01 Feb 2019.
- Assessing BERT's Syntactic Abilities. Yoav Goldberg. 16 Jan 2019.
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 11 Oct 2018.
- Dissecting Contextual Word Embeddings: Architecture and Representation. Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, Wen-tau Yih. 27 Aug 2018.
- What you can cram into a single vector: Probing sentence embeddings for linguistic properties. Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni. 03 May 2018.