All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality
William Timkey, Marten van Schijndel
arXiv:2109.04404, 9 September 2021
Papers citing "All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality" (35 papers)
Do Large Language Models know who did what to whom?
Joseph M. Denning, Xiaohan, Bryor Snefjella, Idan A. Blank. 23 Apr 2025.

Outlier dimensions favor frequent tokens in language models
Iuri Macocco, Nora Graichen, Gemma Boleda, Marco Baroni. 27 Mar 2025.

ReSi: A Comprehensive Benchmark for Representational Similarity Measures
Max Klabunde, Tassilo Wald, Tobias Schumacher, Klaus H. Maier-Hein, Markus Strohmaier, Adriana Iamnitchi. 13 Mar 2025.

Implicit Geometry of Next-token Prediction: From Language Sparsity Patterns to Model Representations
Yize Zhao, Tina Behnia, V. Vakilian, Christos Thrampoulidis. 20 Feb 2025.

Geometric Signatures of Compositionality Across a Language Model's Lifetime
Jin Hwa Lee, Thomas Jiralerspong, Lei Yu, Yoshua Bengio, Emily Cheng. 02 Oct 2024.

LEACE: Perfect linear concept erasure in closed form
Nora Belrose, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, Stella Biderman. 06 Jun 2023.

Similarity of Neural Network Models: A Survey of Functional and Representational Measures
Max Klabunde, Tobias Schumacher, M. Strohmaier, Florian Lemmerich. 10 May 2023.

The MultiBERTs: BERT Reproductions for Robustness Analysis
Thibault Sellam, Steve Yadlowsky, Jason W. Wei, Naomi Saphra, Alexander D'Amour, ..., Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ian Tenney, Ellie Pavlick. 30 Jun 2021.

BERT Busters: Outlier Dimensions that Disrupt Transformers
Olga Kovaleva, Saurabh Kulshreshtha, Anna Rogers, Anna Rumshisky. 14 May 2021.

Let's Play Mono-Poly: BERT Can Reveal Words' Polysemy Level and Partitionability into Senses
Aina Garí Soler, Marianna Apidianaki. 29 Apr 2021.

Positional Artefacts Propagate Through Masked Language Model Embeddings
Ziyang Luo, Artur Kulmizev, Xiaoxi Mao. 09 Nov 2020.

Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation
Elena Voita, Rico Sennrich, Ivan Titov. 21 Oct 2020.

Probing Pretrained Language Models for Lexical Semantics
Ivan Vulić, Edoardo Ponti, Robert Litschko, Goran Glavaš, Anna Korhonen. 12 Oct 2020.

Discourse structure interacts with reference but not syntax in neural language models
Forrest Davis, Marten van Schijndel. 10 Oct 2020.

What Happens To BERT Embeddings During Fine-tuning?
Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, Ian Tenney. 29 Apr 2020.

Leveraging Contextual Embeddings for Detecting Diachronic Semantic Shift
Matej Martinc, Petra Kralj Novak, Senja Pollak. 02 Dec 2019.

How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings
Kawin Ethayarajh. 02 Sep 2019.

Representation Degeneration Problem in Training Natural Language Generation Models
Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, Tie-Yan Liu. 28 Jul 2019.

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov. 26 Jul 2019.

XLNet: Generalized Autoregressive Pretraining for Language Understanding
Zhilin Yang, Zihang Dai, Yiming Yang, J. Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 19 Jun 2019.

Visualizing and Measuring the Geometry of BERT
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, F. Viégas, Martin Wattenberg. 06 Jun 2019.

Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains
Samira Abnar, Lisa Beinborn, Rochelle Choenni, Willem H. Zuidema. 04 Jun 2019.

Correlation Coefficients and Semantic Textual Similarity
V. Zhelezniak, Aleksandar Savkov, April Shen, Nils Y. Hammerla. 19 May 2019.

Correlating neural and symbolic representations of language
Grzegorz Chrupała, Afra Alishahi. 14 May 2019.

BERTScore: Evaluating Text Generation with BERT
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, Yoav Artzi. 21 Apr 2019.

On Measuring Social Biases in Sentence Encoders
Chandler May, Alex Jinpeng Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger. 25 Mar 2019.

Linguistic Knowledge and Transferability of Contextual Representations
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith. 21 Mar 2019.

Evaluating Word Embedding Models: Methods and Experimental Results
Bin Wang, Angela Wang, Fenxiao Chen, Yun Cheng Wang, C.-C. Jay Kuo. 28 Jan 2019.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 11 Oct 2018.

On the importance of single directions for generalization
Ari S. Morcos, David Barrett, Neil C. Rabinowitz, M. Botvinick. 19 Mar 2018.

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin. 12 Jun 2017.

All-but-the-Top: Simple and Effective Postprocessing for Word Representations
Jiaqi Mu, S. Bhat, Pramod Viswanath. 05 Feb 2017.

SimVerb-3500: A Large-Scale Evaluation Set of Verb Similarity
D. Gerz, Ivan Vulić, Felix Hill, Roi Reichart, Anna Korhonen. 02 Aug 2016.

SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation
Felix Hill, Roi Reichart, Anna Korhonen. 15 Aug 2014.

Distributed Representations of Words and Phrases and their Compositionality
Tomas Mikolov, Ilya Sutskever, Kai Chen, G. Corrado, J. Dean. 16 Oct 2013.