Reinforcement Learning from Human Feedback: Whose Culture, Whose Values, Whose Perspectives?


20 January 2025
Kristian González Barman, Simon Lohse, Henk W. de Regt
OffRL

Papers citing "Reinforcement Learning from Human Feedback: Whose Culture, Whose Values, Whose Perspectives?"

Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, Geoffrey Irving
ALM · 18 Sep 2019