ResearchTrend.AI
Inverse Scaling: When Bigger Isn't Better

15 June 2023
I. R. McKenzie, Alexander Lyzhov, Michael Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, Aaron Kirtland, Alexis Ross, Alisa Liu, Andrew Gritsevskiy, Daniel Wurgaft, Derik Kauffman, Gabriel Recchia, Jiacheng Liu, Joe Cavanagh, Max Weiss, Sicong Huang, The Floating Droid, Tom Tseng, Tomasz Korbak, Xudong Shen, Yuhui Zhang, Zhengping Zhou, Najoung Kim, Sam Bowman, Ethan Perez

Papers citing "Inverse Scaling: When Bigger Isn't Better"

50 / 106 papers shown
Empirically evaluating commonsense intelligence in large language models with large-scale human judgments
Tuan Dung Nguyen, Duncan J. Watts, Mark E. Whiting
15 May 2025 [ELM]

Toward the Axiomatization of Intelligence: Structure, Time, and Existence
Kei Itoh
20 Apr 2025

Evaluation Under Imperfect Benchmarks and Ratings: A Case Study in Text Simplification
Joseph Liu, Yoonsoo Nam, Xinyue Cui, Swabha Swayamdipta
13 Apr 2025

On Model and Data Scaling for Skeleton-based Self-Supervised Gait Recognition
Adrian Cosma, Andy Catruna, Emilian Radoi
10 Apr 2025
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Alex Warstadt, Aaron Mueller, Leshem Choshen, E. Wilcox, Chengxu Zhuang, ..., Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, Ryan Cotterell
10 Apr 2025

A Survey of Scaling in Large Language Model Reasoning
Zihan Chen, Song Wang, Zhen Tan, Xingbo Fu, Zhenyu Lei, Peng Wang, Huan Liu, Cong Shen, Jundong Li
02 Apr 2025 [LRM]

Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation
Hongcheng Gao, Jiashu Qu, Jingyi Tang, Baolong Bi, Yi Liu, Hongyu Chen, Li Liang, Li Su, Qingming Huang
25 Mar 2025 [MLLM, VLM, LRM]

Generative Linguistics, Large Language Models, and the Social Nature of Scientific Success
Sophie Hao
25 Mar 2025 [ELM, AI4CE]
Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation
Bowen Baker, Joost Huizinga, Leo Gao, Zehao Dou, M. Guan, Aleksander Mądry, Wojciech Zaremba, J. Pachocki, David Farhi
14 Mar 2025 [LRM]

Research on Superalignment Should Advance Now with Parallel Optimization of Competence and Conformity
HyunJin Kim, Xiaoyuan Yi, Jing Yao, Muhua Huang, Jinyeong Bak, James Evans, Xing Xie
08 Mar 2025

Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions
E. Liu, Amanda Bertsch, Lintang Sutawika, Lindia Tjuatja, Patrick Fernandes, ..., Shri Kiran Srinivasan, Carolin (Haas) Lawrence, Aditi Raghunathan, Kiril Gashteovski, Graham Neubig
05 Mar 2025

The MASK Benchmark: Disentangling Honesty From Accuracy in AI Systems
Richard Ren, Arunim Agarwal, Mantas Mazeika, Cristina Menghini, Robert Vacareanu, ..., Matias Geralnik, Adam Khoja, Dean Lee, Summer Yue, Dan Hendrycks
05 Mar 2025 [HILM, ALM]
BIG-Bench Extra Hard
Mehran Kazemi, Bahare Fatemi, Hritik Bansal, John Palowitch, Chrysovalantis Anastasiou, ..., Kate Olszewska, Yi Tay, Vinh Q. Tran, Quoc V. Le, Orhan Firat
26 Feb 2025 [ELM, LRM]

Provocations from the Humanities for Generative AI Research
Lauren F. Klein, Meredith Martin, André Brock, Maria Antoniak, Melanie Walsh, Jessica Marie Johnson, Lauren Tilton, David M. Mimno
26 Feb 2025 [VLM]

Scaling Laws for Downstream Task Performance in Machine Translation
Berivan Isik, Natalia Ponomareva, Hussein Hazimeh, Dimitris Paparas, Sergei Vassilvitskii, Sanmi Koyejo
24 Feb 2025

Scaling Trends in Language Model Robustness
Nikolhaus Howe, Michal Zajac, I. R. McKenzie, Oskar Hollinsworth, Tom Tseng, Aaron David Tucker, Pierre-Luc Bacon, Adam Gleave
21 Feb 2025
Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking
Benjamin Feuer, Micah Goldblum, Teresa Datta, Sanjana Nambiar, Raz Besaleli, Samuel Dooley, Max Cembalest, John P. Dickerson
28 Jan 2025 [ALM]

Foundations of GenIR
Qingyao Ai, Jingtao Zhan, Yong-Jin Liu
06 Jan 2025

Security Attacks on LLM-based Code Completion Tools
Wen Cheng, Ke Sun, Xinyu Zhang, Wei Wang
03 Jan 2025 [SILM, AAML, ELM]

The Task Shield: Enforcing Task Alignment to Defend Against Indirect Prompt Injection in LLM Agents
Feiran Jia, Tong Wu, Xin Qin, Anna Squicciarini
21 Dec 2024 [LLMAG, AAML]

VLN-Game: Vision-Language Equilibrium Search for Zero-Shot Semantic Navigation
Bangguo Yu, Yuzhen Liu, Lei Han, H. Kasaei, Tingguang Li, M. Cao
18 Nov 2024 [LM&Ro]
GRADE: Quantifying Sample Diversity in Text-to-Image Models
Royi Rassin, Aviv Slobodkin, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg
29 Oct 2024

Do Large Language Models Align with Core Mental Health Counseling Competencies?
Viet Cuong Nguyen, Mohammad Taher, Dongwan Hong, Vinicius Konkolics Possobom, Vibha Thirunellayi Gopalakrishnan, ..., Zihang Li, H. J. Soled, Michael L. Birnbaum, Srijan Kumar, M. D. Choudhury
29 Oct 2024 [ELM, LM&MA, AI4MH]

Semantic Image Inversion and Editing using Rectified Stochastic Differential Equations
Litu Rout, Yujia Chen, Nataniel Ruiz, C. Caramanis, Sanjay Shakkottai, Wen-Sheng Chu
14 Oct 2024 [DiffM]

Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning
Zhengyu Hu, Yichuan Li, Zhengyu Chen, Jiadong Wang, Han Liu, Kyumin Lee, Kaize Ding
09 Oct 2024 [GNN]
You Know What I'm Saying: Jailbreak Attack via Implicit Reference
Tianyu Wu, Lingrui Mei, Ruibin Yuan, Lujun Li, Wei Xue, Yike Guo
04 Oct 2024

U-shaped and Inverted-U Scaling behind Emergent Abilities of Large Language Models
Tung-Yu Wu, Pei-Yu Lo
02 Oct 2024 [ReLM, LRM]

Truth or Deceit? A Bayesian Decoding Game Enhances Consistency and Reliability
Weitong Zhang, Chengqi Zang, Bernhard Kainz
01 Oct 2024

On the Relationship between Truth and Political Bias in Language Models
S. Fulay, William Brannon, Shrestha Mohanty, Cassandra Overney, Elinor Poole-Dayan, Deb Roy, Jad Kabbara
09 Sep 2024 [HILM]

Focused Large Language Models are Stable Many-Shot Learners
Peiwen Yuan, Shaoxiong Feng, Yiwei Li, Xinglin Wang, Y. Zhang, Chuyi Tan, Boyuan Pan, Heda Wang, Yao Hu, Kan Li
26 Aug 2024
Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
Richard Ren, Steven Basart, Adam Khoja, Alice Gatti, Long Phan, ..., Alexander Pan, Gabriel Mukobi, Ryan H. Kim, Stephen Fitz, Dan Hendrycks
31 Jul 2024 [ELM]

SaulLM-54B & SaulLM-141B: Scaling Up Domain Adaptation for the Legal Domain
Pierre Colombo, T. Pires, Malik Boudiaf, Rui Melo, Dominic Culver, Sofia Morgado, Etienne Malaboeuf, Gabriel Hautreux, Johanne Charpentier, Michael Desa
28 Jul 2024 [ELM, AILaw, ALM]

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
27 Jul 2024

Benchmarks as Microscopes: A Call for Model Metrology
Michael Stephen Saxon, Ari Holtzman, Peter West, William Y. Wang, Naomi Saphra
22 Jul 2024

CLAVE: An Adaptive Framework for Evaluating Values of LLM Generated Responses
Jing Yao, Xiaoyuan Yi, Xing Xie
15 Jul 2024 [ELM, ALM]
FRoG: Evaluating Fuzzy Reasoning of Generalized Quantifiers in Large Language Models
Yiyuan Li, Shichao Sun, Pengfei Liu
01 Jul 2024 [LRM]

FastMem: Fast Memorization of Prompt Improves Context Awareness of Large Language Models
Junyi Zhu, Shuochen Liu, Yu Yu, Bo Tang, Yibo Yan, Zhiyu Li, Tong Xu, Matthew B. Blaschko
23 Jun 2024

Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing
Han Jiang, Xiaoyuan Yi, Zhihua Wei, Shu Wang, Xing Xie
20 Jun 2024 [ALM, ELM]

AgentDojo: A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents
Edoardo Debenedetti, Jie Zhang, Mislav Balunović, Luca Beurer-Kellner, Marc Fischer, Florian Tramèr
19 Jun 2024 [LLMAG, AAML]

[WIP] Jailbreak Paradox: The Achilles' Heel of LLMs
Abhinav Rao, Monojit Choudhury, Somak Aditya
18 Jun 2024

LLMs Are Prone to Fallacies in Causal Inference
Nitish Joshi, Abulhair Saparov, Yixin Wang, He He
18 Jun 2024
MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts
Guanjie Chen, Xinyu Zhao, Tianlong Chen, Yu Cheng
17 Jun 2024 [MoE]

Assessing the Emergent Symbolic Reasoning Abilities of Llama Large Language Models
Flavio Petruzzellis, Alberto Testolin, A. Sperduti
05 Jun 2024 [ReLM, LRM]

Self-Control of LLM Behaviors by Compressing Suffix Gradient into Prefix Controller
Min Cai, Yuchen Zhang, Shichang Zhang, Fan Yin, Difan Zou, Yisong Yue, Ziniu Hu
04 Jun 2024

Evaluating Mathematical Reasoning of Large Language Models: A Focus on Error Identification and Correction
Xiaoyuan Li, Wenjie Wang, Moxin Li, Junrong Guo, Yang Zhang, Fuli Feng
02 Jun 2024 [ELM, LRM]
ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation
Jingnan Zheng, Han Wang, An Zhang, Tai D. Nguyen, Jun Sun, Tat-Seng Chua
23 May 2024 [LLMAG]

CrossCheckGPT: Universal Hallucination Ranking for Multimodal Foundation Models
Guangzhi Sun, Potsawee Manakul, Adian Liusie, Kunat Pipatanakul, Chao Zhang, P. Woodland, Mark J. F. Gales
22 May 2024 [HILM, MLLM]

Can Language Models Explain Their Own Classification Behavior?
Dane Sherburn, Bilal Chughtai, Owain Evans
13 May 2024

Quantifying the Capabilities of LLMs across Scale and Precision
Sher Badshah, Hassan Sajjad
06 May 2024

Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models
Piotr Padlewski, Max Bain, Matthew Henderson, Zhongkai Zhu, Nishant Relan, ..., Che Zheng, Cyprien de Masson d'Autume, Dani Yogatama, Mikel Artetxe, Yi Tay
03 May 2024 [VLM]