arXiv: 2409.04556
How Does Code Pretraining Affect Language Model Task Performance?
6 September 2024
Jackson Petty
Sjoerd van Steenkiste
Tal Linzen
Papers citing "How Does Code Pretraining Affect Language Model Task Performance?"
8 papers
Mining Hidden Thoughts from Texts: Evaluating Continual Pretraining with Synthetic Data for LLM Reasoning
Yoichi Ishibashi
Taro Yano
Masafumi Oyamada
15 May 2025
Trillion 7B Technical Report
Sungjun Han
Juyoung Suk
Suyeong An
Hyungguk Kim
Kyuseok Kim
Wonsuk Yang
Seungtaek Choi
Jamin Shin
21 Apr 2025
Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining
Rosie Zhao
Alexandru Meterez
Sham Kakade
C. Pehlevan
Samy Jelassi
Eran Malach
10 Apr 2025
Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources
Zihao Li
Shaoxiong Ji
Hengyu Luo
Jörg Tiedemann
05 Apr 2025
Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions
E. Liu
Amanda Bertsch
Lintang Sutawika
Lindia Tjuatja
Patrick Fernandes
...
Shri Kiran Srinivasan
Carolin (Haas) Lawrence
Aditi Raghunathan
Kiril Gashteovski
Graham Neubig
05 Mar 2025
General Reasoning Requires Learning to Reason from the Get-go
Seungwook Han
Jyothish Pari
Samuel J. Gershman
Pulkit Agrawal
26 Feb 2025
IPO: Your Language Model is Secretly a Preference Classifier
Shivank Garg
Ayush Singh
Shweta Singh
Paras Chopra
22 Feb 2025
Uncovering Autoregressive LLM Knowledge of Thematic Fit in Event Representation
Safeyah Khaled Alshemali
Daniel Bauer
Yuval Marton
19 Oct 2024