ABC Align: Large Language Model Alignment for Safety & Accuracy
arXiv:2408.00307 · 1 August 2024
Gareth Seneque, Lap-Hang Ho, Peter W. Glynn, Yinyu Ye, Jeffrey Molendijk
Papers citing "ABC Align: Large Language Model Alignment for Safety & Accuracy" (6 of 6 papers shown)
Position: We need responsible, application-driven (RAD) AI research
Sarah Hartman, Cheng Soon Ong, Julia Powles, Petra Kuhnert
07 May 2025 · 0 citations

The Platonic Representation Hypothesis
Minyoung Huh, Brian Cheung, Tongzhou Wang, Phillip Isola
13 May 2024 · 117 citations

Analyzing and Adapting Large Language Models for Few-Shot Multilingual NLU: Are We There Yet?
E. Razumovskaia, Ivan Vulić, Anna Korhonen
04 Mar 2024 · 6 citations

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022 · 12,150 citations · OSLM, ALM

BBQ: A Hand-Built Bias Benchmark for Question Answering
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman
15 Oct 2021 · 381 citations

Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
18 Sep 2019 · 1,620 citations · ALM