Cited By

Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution
Jaap Jumelet, Willem H. Zuidema
arXiv:2310.14840, 23 October 2023
Papers citing "Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution" (13 of 13 shown):
- What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages. Nadav Borenstein, Anej Svete, R. Chan, Josef Valvoda, Franz Nowak, Isabelle Augenstein, Eleanor Chodroff, Ryan Cotterell. 06 Jun 2024.
- What Comes Next? Evaluating Uncertainty in Neural Text Generators Against Human Production Variability. Mario Giulianelli, Joris Baan, Wilker Aziz, Raquel Fernández, Barbara Plank. 19 May 2023.
- Examining the Inductive Bias of Neural Language Models with Artificial Languages. Jennifer C. White, Ryan Cotterell. 02 Jun 2021.
- The Pile: An 800GB Dataset of Diverse Text for Language Modeling. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy. 31 Dec 2020.
- Evaluating Attribution Methods using White-Box LSTMs. Sophie Hao. 16 Oct 2020.
- DeBERTa: Decoding-enhanced BERT with Disentangled Attention. Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. 05 Jun 2020.
- Scaling Laws for Neural Language Models. Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei. 23 Jan 2020.
- BLiMP: The Benchmark of Linguistic Minimal Pairs for English. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, Samuel R. Bowman. 02 Dec 2019.
- RoBERTa: A Robustly Optimized BERT Pretraining Approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov. 26 Jul 2019.
- What Kind of Language Is Hard to Language-Model? Sabrina J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, Jason Eisner. 11 Jun 2019.
- BERT Rediscovers the Classical NLP Pipeline. Ian Tenney, Dipanjan Das, Ellie Pavlick. 15 May 2019.
- Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages. Shauli Ravfogel, Yoav Goldberg, Tal Linzen. 15 Mar 2019.
- What you can cram into a single vector: Probing sentence embeddings for linguistic properties. Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni. 03 May 2018.