ResearchTrend.AI
Lower Perplexity is Not Always Human-Like
arXiv:2106.01229 · 2 June 2021
Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui

Papers citing "Lower Perplexity is Not Always Human-Like" (19 papers shown)
1. Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling Efficiently Scales Test-Time Compute (01 Apr 2025)
   Jianhao Chen, Zishuo Xun, Bocheng Zhou, Han Qi, Qiaosheng Zhang, ..., Wei Hu, Yuzhong Qu, Wanli Ouyang, Shuyue Hu

2. Large Language Models Are Human-Like Internally (03 Feb 2025)
   Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, Timothy Baldwin

3. Scaling Diffusion Language Models via Adaptation from Autoregressive Models (23 Oct 2024) [AI4CE]
   Shansan Gong, Shivam Agarwal, Yizhe Zhang, Jiacheng Ye, Lin Zheng, ..., Peilin Zhao, W. Bi, Jiawei Han, Hao Peng, Lingpeng Kong

4. Round and Round We Go! What makes Rotary Positional Encodings useful? (08 Oct 2024)
   Federico Barbero, Alex Vitvitskyi, Christos Perivolaropoulos, Razvan Pascanu, Petar Velickovic

5. MetaMetrics: Calibrating Metrics For Generation Tasks Using Human Preferences (03 Oct 2024)
   Genta Indra Winata, David Anugraha, Lucky Susanto, Garry Kuwanto, Derry Wijaya

6. On the Role of Context in Reading Time Prediction (12 Sep 2024)
   Andreas Opedal, Eleanor Chodroff, Ryan Cotterell, Ethan Gotlieb Wilcox

7. Language models emulate certain cognitive profiles: An investigation of how predictability measures interact with individual differences (07 Jun 2024)
   Patrick Haller, Lena S. Bolliger, Lena Ann Jäger

8. Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence (24 May 2024)
   Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld

9. Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models (15 Nov 2023)
   J. Michaelov, Catherine Arnett, Tyler A. Chang, Benjamin Bergen

10. Testing the Predictions of Surprisal Theory in 11 Languages (07 Jul 2023) [LRM]
    Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, R. Levy

11. Language Models are Drummers: Drum Composition with Natural Language Pre-Training (03 Jan 2023)
    Li Zhang, Chris Callison-Burch

12. A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models (19 Dec 2022)
    Karin de Langis, Dongyeop Kang

13. On the Effect of Anticipation on Reading Times (25 Nov 2022)
    Tiago Pimentel, Clara Meister, Ethan Gotlieb Wilcox, R. Levy, Ryan Cotterell

14. Composition, Attention, or Both? (24 Oct 2022) [CoGe]
    Ryosuke Yoshida, Yohei Oseki

15. Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications (13 May 2022) [ELM]
    Kaitlyn Zhou, Su Lin Blodgett, Adam Trischler, Hal Daumé, Kaheer Suleman, Alexandra Olteanu

16. Probing BERT's priors with serial reproduction chains (24 Feb 2022)
    Takateru Yamakoshi, Thomas L. Griffiths, Robert D. Hawkins

17. Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars (10 Sep 2021)
    Ryo Yoshida, Hiroshi Noji, Yohei Oseki

18. Extracting Training Data from Large Language Models (14 Dec 2020) [MLAU, SILM]
    Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel

19. Scaling Laws for Neural Language Models (23 Jan 2020)
    Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei