Cited By

Argumentative Reward Learning: Reasoning About Human Preferences
Francis Rhys Ward, Francesco Belardinelli, Francesca Toni
arXiv:2209.14010 · 28 September 2022 · Tags: HAI
Papers citing "Argumentative Reward Learning: Reasoning About Human Preferences" (4 of 4 papers shown):

REFINER: Reasoning Feedback on Intermediate Representations
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, Boi Faltings
Tags: ReLM, LRM · 04 Apr 2023

Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans
John J. Nay
Tags: ELM, AILaw · 14 Sep 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM · 04 Mar 2022

Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
Tags: ALM · 18 Sep 2019