ResearchTrend.AI

Aligning Text-to-Image Diffusion Models with Reward Backpropagation
arXiv: 2310.03739 · 5 October 2023
Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, Katerina Fragkiadaki

Papers citing "Aligning Text-to-Image Diffusion Models with Reward Backpropagation" (41 of 91 papers shown)
  • Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review. Masatoshi Uehara, Yulai Zhao, Tommaso Biancalani, Sergey Levine. 18 Jul 2024.
  • MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation? Zhaorun Chen, Yichao Du, Zichen Wen, Yiyang Zhou, Chenhang Cui, ..., Jiawei Zhou, Zhuokai Zhao, Rafael Rafailov, Chelsea Finn, Huaxiu Yao. 05 Jul 2024. [EGVM, MLLM]
  • Aligning Diffusion Models with Noise-Conditioned Perception. Alexander Gambashidze, Anton Kulikov, Yuriy Sosnin, Ilya Makarov. 25 Jun 2024.
  • GenAI-Bench: Evaluating and Improving Compositional Text-to-Visual Generation. Baiqi Li, Zhiqiu Lin, Deepak Pathak, Jiayao Li, Yixin Fei, ..., Tiffany Ling, Xide Xia, Pengchuan Zhang, Graham Neubig, Deva Ramanan. 19 Jun 2024. [EGVM]
  • Adding Conditional Control to Diffusion Models with Reinforcement Learning. Yulai Zhao, Masatoshi Uehara, Gabriele Scalia, Tommaso Biancalani, Sergey Levine, Ehsan Hajiramezanali. 17 Jun 2024. [AI4CE]
  • Margin-aware Preference Optimization for Aligning Diffusion Models without Reference. Jiwoo Hong, Sayak Paul, Noah Lee, Kashif Rasul, James Thorne, Jongheon Jeong. 10 Jun 2024.
  • Diffusion-RPO: Aligning Diffusion Models through Relative Preference Optimization. Yi Gu, Zhendong Wang, Yueqin Yin, Yujia Xie, Mingyuan Zhou. 10 Jun 2024.
  • ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization. L. Eyring, Shyamgopal Karthik, Karsten Roth, Alexey Dosovitskiy, Zeynep Akata. 06 Jun 2024.
  • Improving GFlowNets for Text-to-Image Diffusion Alignment. Dinghuai Zhang, Yizhe Zhang, Jiatao Gu, Ruixiang Zhang, J. Susskind, Navdeep Jaitly, Shuangfei Zhai. 02 Jun 2024. [EGVM]
  • Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models. Masatoshi Uehara, Yulai Zhao, Ehsan Hajiramezanali, Gabriele Scalia, Gökçen Eraslan, Avantika Lal, Sergey Levine, Tommaso Biancalani. 30 May 2024.
  • T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback. Jiachen Li, Weixi Feng, Tsu-jui Fu, Xinyi Wang, Sugato Basu, Wenhu Chen, William Yang Wang. 29 May 2024. [VGen]
  • Deep Reward Supervisions for Tuning Text-to-Image Diffusion Models. Xiaoshi Wu, Yiming Hao, Manyuan Zhang, Keqiang Sun, Zhaoyang Huang, Guanglu Song, Yu Liu, Hongsheng Li. 01 May 2024. [EGVM]
  • PuLID: Pure and Lightning ID Customization via Contrastive Alignment. Zinan Guo, Yanze Wu, Zhuowei Chen, Lang Chen, Qian He. 24 Apr 2024. [DiffM]
  • Multimodal Large Language Model is a Human-Aligned Annotator for Text-to-Image Generation. Xun Wu, Shaohan Huang, Furu Wei. 23 Apr 2024.
  • Gradient Guidance for Diffusion Models: An Optimization Perspective. Yingqing Guo, Hui Yuan, Yukang Yang, Minshuo Chen, Mengdi Wang. 23 Apr 2024.
  • ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback. Ming Li, Taojiannan Yang, Huafeng Kuang, Jie Wu, Zhaoning Wang, Xuefeng Xiao, Cheng Chen. 11 Apr 2024.
  • Aligning Diffusion Models by Optimizing Human Utility. Shufan Li, Konstantinos Kallidromitis, Akash Gokul, Yusuke Kato, Kazuki Kozuka. 06 Apr 2024.
  • Pixel-wise RL on Diffusion Models: Reinforcement Learning from Rich Feedback. Mo Kordzanganeh, Danial Keshvary, Nariman Arian. 05 Apr 2024. [EGVM]
  • TextCraftor: Your Text Encoder Can be Image Quality Controller. Yanyu Li, Xian Liu, Anil Kag, Ju Hu, Yerlan Idelbayev, Dhritiman Sagar, Yanzhi Wang, Sergey Tulyakov, Jian Ren. 27 Mar 2024.
  • RL for Consistency Models: Faster Reward Guided Text-to-Image Generation. Owen Oertell, Jonathan D. Chang, Yiyi Zhang, Kianté Brantley, Wen Sun. 25 Mar 2024. [EGVM]
  • Reward Guided Latent Consistency Distillation. Jiachen Li, Weixi Feng, Wenhu Chen, William Yang Wang. 16 Mar 2024. [EGVM]
  • SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data. Jialu Li, Jaemin Cho, Yi-Lin Sung, Jaehong Yoon, Mohit Bansal. 11 Mar 2024. [MoMe, DiffM]
  • Feedback Efficient Online Fine-Tuning of Diffusion Models. Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, N. Diamant, Alex Tseng, Sergey Levine, Tommaso Biancalani. 26 Feb 2024.
  • Graph Diffusion Policy Optimization. Yijing Liu, Chao Du, Tianyu Pang, Chongxuan Li, Wei Chen, Min-Bin Lin. 26 Feb 2024.
  • Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control. Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, N. Diamant, Alex Tseng, Tommaso Biancalani, Sergey Levine. 23 Feb 2024.
  • Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation. Huizhuo Yuan, Zixiang Chen, Kaixuan Ji, Quanquan Gu. 15 Feb 2024.
  • PRDP: Proximal Reward Difference Prediction for Large-Scale Reward Finetuning of Diffusion Models. Fei Deng, Qifei Wang, Wei Wei, Matthias Grundmann, Tingbo Hou. 13 Feb 2024. [EGVM]
  • Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases. Ziyi Zhang, Sen Zhang, Yibing Zhan, Yong Luo, Yonggang Wen, Dacheng Tao. 13 Feb 2024. [EGVM]
  • A Dense Reward View on Aligning Text-to-Image Diffusion with Preference. Shentao Yang, Tianqi Chen, Mingyuan Zhou. 13 Feb 2024. [EGVM]
  • Source-Free Domain Adaptation with Diffusion-Guided Source Data Generation. Shivang Chopra, Suraj Kothawade, Houda Aynaou, Aman Chadha. 07 Feb 2024. [DiffM]
  • DITTO: Diffusion Inference-Time T-Optimization for Music Generation. Zachary Novack, Julian McAuley, Taylor Berg-Kirkpatrick, Nicholas J. Bryan. 22 Jan 2024. [DiffM]
  • Carve3D: Improving Multi-view Reconstruction Consistency for Diffusion Models with RL Finetuning. Desai Xie, Jiahao Li, Hao Tan, Xin Sun, Zhixin Shu, Yi Zhou, Sai Bi, Soren Pirk, Arie E. Kaufman. 21 Dec 2023.
  • InstructVideo: Instructing Video Diffusion Models with Human Feedback. Hangjie Yuan, Shiwei Zhang, Xiang Wang, Yujie Wei, Tao Feng, Yining Pan, Yingya Zhang, Ziwei Liu, Samuel Albanie, Dong Ni. 19 Dec 2023. [VGen]
  • InstructBooth: Instruction-following Personalized Text-to-Image Generation. Daewon Chae, Nokyung Park, Jinkyu Kim, Kimin Lee. 04 Dec 2023. [DiffM]
  • Enhancing Diffusion Models with Text-Encoder Reinforcement Learning. Chaofeng Chen, Annan Wang, Haoning Wu, Liang Liao, Wenxiu Sun, Qiong Yan, Weisi Lin. 27 Nov 2023.
  • Reinforcement Learning from Diffusion Feedback: Q* for Image Search. Aboli Rajan Marathe. 27 Nov 2023. [VLM]
  • Diffusion Model Alignment Using Direct Preference Optimization. Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq R. Joty, Nikhil Naik. 21 Nov 2023. [EGVM]
  • End-to-End Diffusion Latent Optimization Improves Classifier Guidance. Bram Wallace, Akash Gokul, Stefano Ermon, Nikhil Naik. 23 Mar 2023.
  • Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe. 04 Mar 2022. [OSLM, ALM]
  • Zero-Shot Text-to-Image Generation. Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever. 24 Feb 2021. [VLM]
  • Fine-Tuning Language Models from Human Preferences. Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving. 18 Sep 2019. [ALM]