ResearchTrend.AI


arXiv:2406.16377 · Cited By

On the Transformations across Reward Model, Parameter Update, and In-Context Prompt
24 June 2024
Deng Cai, Huayang Li, Tingchen Fu, Siheng Li, Weiwen Xu, Shuaiyi Li, Bowen Cao, Zhisong Zhang, Xinting Huang, Leyang Cui, Yan Wang, Lemao Liu, Taro Watanabe, Shuming Shi
    KELM

Papers citing "On the Transformations across Reward Model, Parameter Update, and In-Context Prompt"

29 / 29 papers shown
LiPO: Listwise Preference Optimization through Learning-to-Rank
Tianqi Liu, Zhen Qin, Junru Wu, Jiaming Shen, Misha Khalman, ..., Mohammad Saleh, Simon Baumgartner, Jialu Liu, Peter J. Liu, Xuanhui Wang
210 · 54 · 0 · 28 Jan 2025
WildChat: 1M ChatGPT Interaction Logs in the Wild
Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, Yuntian Deng
63 · 208 · 0 · 02 May 2024
RLVF: Learning from Verbal Feedback without Overgeneralization
Moritz Stephan, Alexander Khazatsky, Eric Mitchell, Annie S. Chen, Sheryl Hsu, Archit Sharma, Chelsea Finn
59 · 12 · 0 · 16 Feb 2024
V-STaR: Training Verifiers for Self-Taught Reasoners
Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Rameswar Panda, Alessandro Sordoni, Rishabh Agarwal
ReLM, LRM · 64 · 119 · 0 · 09 Feb 2024
KTO: Model Alignment as Prospect Theoretic Optimization
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, Douwe Kiela
199 · 510 · 0 · 02 Feb 2024
Tradeoffs Between Alignment and Helpfulness in Language Models with Steering Methods
Yotam Wolf, Noam Wies, Dorin Shteyman, Binyamin Rothberg, Yoav Levine, Amnon Shashua
LLMSV · 78 · 13 · 0 · 29 Jan 2024
Language Model Alignment with Elastic Reset
Michael Noukhovitch, Samuel Lavoie, Florian Strub, Aaron Courville
KELM · 120 · 25 · 0 · 06 Dec 2023
Let's Reinforce Step by Step
Sarah Pan, Vladislav Lialin, Sherin Muckatira, Anna Rumshisky
ReLM, LRM · 38 · 8 · 0 · 10 Nov 2023
Zephyr: Direct Distillation of LM Alignment
Lewis Tunstall, E. Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, ..., Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, Thomas Wolf
ALM · 64 · 382 · 0 · 25 Oct 2023
PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization
Xinyuan Wang, Chenxi Li, Zhen Wang, Fan Bai, Haotian Luo, Jiayou Zhang, Nebojsa Jojic, Eric P. Xing, Zhiting Hu
88 · 112 · 0 · 25 Oct 2023
An Emulator for Fine-Tuning Large Language Models using Small Language Models
Eric Mitchell, Rafael Rafailov, Archit Sharma, Chelsea Finn, Christopher D. Manning
ALM · 55 · 55 · 0 · 19 Oct 2023
A General Theoretical Paradigm to Understand Learning from Human Preferences
M. G. Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, Rémi Munos
112 · 597 · 0 · 18 Oct 2023
Reward Model Ensembles Help Mitigate Overoptimization
Thomas Coste, Usman Anwar, Robert Kirk, David M. Krueger
NoLa, ALM · 55 · 128 · 0 · 04 Oct 2023
Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, ..., Dacheng Li, Eric Xing, Haotong Zhang, Joseph E. Gonzalez, Ion Stoica
ALM, OSLM, ELM · 236 · 4,186 · 0 · 09 Jun 2023
Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?
Weijia Shi, Xiaochuang Han, Hila Gonen, Ari Holtzman, Yulia Tsvetkov, Luke Zettlemoyer
55 · 42 · 0 · 20 Dec 2022
Constitutional AI: Harmlessness from AI Feedback
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, John Kernion, ..., Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom B. Brown, Jared Kaplan
SyDa, MoMe · 152 · 1,583 · 0 · 15 Dec 2022
Demystifying Prompts in Language Models via Perplexity Estimation
Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer
LRM · 78 · 205 · 0 · 08 Dec 2022
Robustness of Learning from Task Instructions
Jiasheng Gu, Hongyu Zhao, Hanzi Xu, Liang Nie, Hongyuan Mei, Wenpeng Yin
OOD · 65 · 33 · 0 · 07 Dec 2022
Contrastive Decoding: Open-ended Text Generation as Optimization
Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, M. Lewis
70 · 348 · 0 · 27 Oct 2022
Large Language Models Can Self-Improve
Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han
ReLM, AI4MH, LRM · 114 · 594 · 0 · 20 Oct 2022
Quark: Controllable Text Generation with Reinforced Unlearning
Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, Yejin Choi
MU · 95 · 213 · 0 · 26 May 2022
A General Language Assistant as a Laboratory for Alignment
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, ..., Tom B. Brown, Jack Clark, Sam McCandlish, C. Olah, Jared Kaplan
ALM · 94 · 762 · 0 · 01 Dec 2021
DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts
Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi
MU · 85 · 369 · 0 · 07 May 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 460 · 3,952 · 0 · 18 Apr 2021
Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li, Percy Liang
174 · 4,209 · 0 · 01 Jan 2021
You Impress Me: Dialogue Generation via Mutual Persona Perception
Qian Liu, Yihong Chen, B. Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, Dongmei Zhang
52 · 166 · 0 · 11 Apr 2020
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
ODL · 277 · 42,038 · 0 · 03 Dec 2019
Proximal Policy Optimization Algorithms
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov
OffRL · 236 · 18,685 · 0 · 20 Jul 2017
Minimum Risk Training for Neural Machine Translation
Shiqi Shen, Yong Cheng, Zhongjun He, W. He, Hua Wu, Maosong Sun, Yang Liu
90 · 469 · 0 · 08 Dec 2015