BERT Rediscovers the Classical NLP Pipeline
Ian Tenney, Dipanjan Das, Ellie Pavlick
15 May 2019 · arXiv:1905.05950
Tags: MILM, SSeg
Papers citing "BERT Rediscovers the Classical NLP Pipeline" (50 / 296 papers shown)
1. COPEN: Probing Conceptual Knowledge in Pre-trained Language Models
   Hao Peng, Xiaozhi Wang, Shengding Hu, Hailong Jin, Lei Hou, Juanzi Li, Zhiyuan Liu, Qun Liu (08 Nov 2022)

2. Logographic Information Aids Learning Better Representations for Natural Language Inference
   Zijian Jin, Duygu Ataman (03 Nov 2022)

3. A Law of Data Separation in Deep Learning
   Hangfeng He, Weijie J. Su (31 Oct 2022) [OOD]

4. Controlled Text Reduction
   Aviv Slobodkin, Paul Roit, Eran Hirsch, Ori Ernst, Ido Dagan (24 Oct 2022)

5. Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs
   Maarten Sap, Ronan Le Bras, Daniel Fried, Yejin Choi (24 Oct 2022)

6. Structural generalization is hard for sequence-to-sequence models
   Yuekun Yao, Alexander Koller (24 Oct 2022)

7. On the Transformation of Latent Space in Fine-Tuned NLP Models
   Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Firoj Alam (23 Oct 2022)

8. Probing with Noise: Unpicking the Warp and Weft of Embeddings
   Filip Klubicka, John D. Kelleher (21 Oct 2022)

9. Spectral Probing
   Max Müller-Eberstein, Rob van der Goot, Barbara Plank (21 Oct 2022)

10. SLING: Sino Linguistic Evaluation of Large Language Models
    Yixiao Song, Kalpesh Krishna, R. Bhatt, Mohit Iyyer (21 Oct 2022)

11. Enhancing Out-of-Distribution Detection in Natural Language Understanding via Implicit Layer Ensemble
    Hyunsoo Cho, Choonghyun Park, Jaewoo Kang, Kang Min Yoo, Taeuk Kim, Sang-goo Lee (20 Oct 2022) [OODD]

12. Transformers Learn Shortcuts to Automata
    Bingbin Liu, Jordan T. Ash, Surbhi Goel, A. Krishnamurthy, Cyril Zhang (19 Oct 2022) [OffRL, LRM]

13. Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning
    Shuo Xie, Jiahao Qiu, Ankita Pasad, Li Du, Qing Qu, Hongyuan Mei (18 Oct 2022)

14. On the Explainability of Natural Language Processing Deep Models
    Julia El Zini, M. Awad (13 Oct 2022)

15. Downstream Datasets Make Surprisingly Good Pretraining Corpora
    Kundan Krishna, Saurabh Garg, Jeffrey P. Bigham, Zachary Chase Lipton (28 Sep 2022)

16. Causal Proxy Models for Concept-Based Model Explanations
    Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts (28 Sep 2022) [MILM]

17. Fast-FNet: Accelerating Transformer Encoder Models via Efficient Fourier Layers
    Nurullah Sevim, Ege Ozan Özyedek, Furkan Şahinuç, Aykut Koç (26 Sep 2022)

18. Revisiting the Practical Effectiveness of Constituency Parse Extraction from Pre-trained Language Models
    Taeuk Kim (15 Sep 2022)

19. Analyzing Transformers in Embedding Space
    Guy Dar, Mor Geva, Ankit Gupta, Jonathan Berant (06 Sep 2022)

20. Why Do Neural Language Models Still Need Commonsense Knowledge to Handle Semantic Variations in Question Answering?
    Sunjae Kwon, Cheongwoong Kang, Jiyeon Han, Jaesik Choi (01 Sep 2022)

21. Interpreting Embedding Spaces by Conceptualization
    Adi Simhi, Shaul Markovitch (22 Aug 2022)

22. A Syntax Aware BERT for Identifying Well-Formed Queries in a Curriculum Framework
    Avinash Madasu, Anvesh Rao Vijjini (21 Aug 2022)

23. An Interpretability Evaluation Benchmark for Pre-trained Language Models
    Ya-Ming Shen, Lijie Wang, Ying-Cong Chen, Xinyan Xiao, Jing Liu, Hua Wu (28 Jul 2022)

24. BOSS: Bottom-up Cross-modal Semantic Composition with Hybrid Counterfactual Training for Robust Content-based Image Retrieval
    Wenqiao Zhang, Jiannan Guo, Meng Li, Haochen Shi, Shengyu Zhang, Juncheng Li, Siliang Tang, Yueting Zhuang (09 Jul 2022)

25. Probing via Prompting
    Jiaoda Li, Ryan Cotterell, Mrinmaya Sachan (04 Jul 2022)

26. VL-CheckList: Evaluating Pre-trained Vision-Language Models with Objects, Attributes and Relations
    Tiancheng Zhao, Tianqi Zhang, Mingwei Zhu, Haozhan Shen, Kyusong Lee, Xiaopeng Lu, Jianwei Yin (01 Jul 2022) [VLM, CoGe, MLLM]

27. Towards Unsupervised Content Disentanglement in Sentence Representations via Syntactic Roles
    G. Felhi, Joseph Le Roux, Djamé Seddah (22 Jun 2022) [DRL]

28. Evaluating Self-Supervised Learning for Molecular Graph Embeddings
    Hanchen Wang, Jean Kaddour, Shengchao Liu, Jian Tang, Joan Lasenby, Qi Liu (16 Jun 2022)

29. Transition-based Abstract Meaning Representation Parsing with Contextual Embeddings
    Yi Liang (13 Jun 2022)

30. ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
    Z. Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He (04 Jun 2022) [VLM, MQ]

31. On Building Spoken Language Understanding Systems for Low Resourced Languages
    Akshat Gupta (25 May 2022)

32. What Drives the Use of Metaphorical Language? Negative Insights from Abstractness, Affect, Discourse Coherence and Contextualized Word Representations
    P. Piccirilli, Sabine Schulte im Walde (23 May 2022)

33. The Geometry of Multilingual Language Model Representations
    Tyler A. Chang, Z. Tu, Benjamin Bergen (22 May 2022)

34. Life after BERT: What do Other Muppets Understand about Language?
    Vladislav Lialin, Kevin Zhao, Namrata Shivagunde, Anna Rumshisky (21 May 2022)

35. Assessing the Limits of the Distributional Hypothesis in Semantic Spaces: Trait-based Relational Knowledge and the Impact of Co-occurrences
    Mark Anderson, Jose Camacho-Collados (16 May 2022)

36. Discovering Latent Concepts Learned in BERT
    Fahim Dalvi, A. Khan, Firoj Alam, Nadir Durrani, Jia Xu, Hassan Sajjad (15 May 2022) [SSL]

37. Exploiting Inductive Bias in Transformers for Unsupervised Disentanglement of Syntax and Semantics with VAEs
    G. Felhi, Joseph Le Roux, Djamé Seddah (12 May 2022) [DRL]

38. When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it
    Sebastian Schuster, Tal Linzen (06 May 2022)

39. Adaptable Adapters
    N. Moosavi, Quentin Delfosse, Kristian Kersting, Iryna Gurevych (03 May 2022)

40. AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks
    Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee (30 Apr 2022)

41. UniTE: Unified Translation Evaluation
    Boyi Deng, Dayiheng Liu, Baosong Yang, Haibo Zhang, Boxing Chen, Derek F. Wong, Lidia S. Chao (28 Apr 2022)

42. LyS_ACoruña at SemEval-2022 Task 10: Repurposing Off-the-Shelf Tools for Sentiment Analysis as Semantic Dependency Parsing
    I. Alonso-Alonso, David Vilares, Carlos Gómez-Rodríguez (27 Apr 2022)

43. Mono vs Multilingual BERT for Hate Speech Detection and Text Classification: A Case Study in Marathi
    Abhishek Velankar, H. Patil, Raviraj Joshi (19 Apr 2022)

44. CILDA: Contrastive Data Augmentation using Intermediate Layer Knowledge Distillation
    Md. Akmal Haidar, Mehdi Rezagholizadeh, Abbas Ghaddar, Khalil Bibi, Philippe Langlais, Pascal Poupart (15 Apr 2022) [CLL]

45. Text Revision by On-the-Fly Representation Optimization
    Jingjing Li, Zichao Li, Tao Ge, Irwin King, M. Lyu (15 Apr 2022) [BDL]

46. An Exploratory Study on Code Attention in BERT
    Rishab Sharma, Fuxiang Chen, Fatemeh H. Fard, David Lo (05 Apr 2022)

47. Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
    Andy Zeng, Maria Attarian, Brian Ichter, K. Choromanski, Adrian S. Wong, ..., Michael S. Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, Peter R. Florence (01 Apr 2022) [ReLM, LRM]

48. Effect and Analysis of Large-scale Language Model Rescoring on Competitive ASR Systems
    Takuma Udagawa, Masayuki Suzuki, Gakuto Kurata, N. Itoh, G. Saon (01 Apr 2022)

49. Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
    Mor Geva, Avi Caciularu, Ke Wang, Yoav Goldberg (28 Mar 2022) [KELM]

50. Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages
    Ehsan Aghazadeh, Mohsen Fayyaz, Yadollah Yaghoobzadeh (26 Mar 2022)