Aligning Large Language Models with Counterfactual DPO

17 January 2024
Bradley Butcher
Topics: ALM

Papers citing "Aligning Large Language Models with Counterfactual DPO"

4 papers shown
Constitutional AI: Harmlessness from AI Feedback
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, John Kernion, ..., Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom B. Brown, Jared Kaplan
Topics: SyDa, MoMe
15 Dec 2022
Prompting PaLM for Translation: Assessing Strategies and Performance
David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, George F. Foster
Topics: LRM
16 Nov 2022
Is Reinforcement Learning (Not) for Natural Language Processing: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization
Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, R. Sifa, Christian Bauckhage, Hannaneh Hajishirzi, Yejin Choi
Topics: OffRL
03 Oct 2022
Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
Topics: ALM
18 Sep 2019