ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Constitutional AI: Harmlessness from AI Feedback (arXiv:2212.08073)

15 December 2022
Yuntao Bai
Saurav Kadavath
Sandipan Kundu
Amanda Askell
John Kernion
Andy Jones
A. Chen
Anna Goldie
Azalia Mirhoseini
C. McKinnon
Carol Chen
Catherine Olsson
C. Olah
Danny Hernandez
Dawn Drain
Deep Ganguli
Dustin Li
Eli Tran-Johnson
E. Perez
Jamie Kerr
J. Mueller
Jeff Ladish
J. Landau
Kamal Ndousse
Kamilė Lukošiūtė
Liane Lovitt
Michael Sellitto
Nelson Elhage
Nicholas Schiefer
Noemí Mercado
Nova Dassarma
R. Lasenby
Robin Larson
Sam Ringer
Scott R. Johnston
Shauna Kravec
S. E. Showk
Stanislav Fort
Tamera Lanham
Timothy Telleen-Lawton
Tom Conerly
T. Henighan
Tristan Hume
Sam Bowman
Zac Hatfield-Dodds
Benjamin Mann
Dario Amodei
Nicholas Joseph
Sam McCandlish
Tom B. Brown
Jared Kaplan
    SyDa
    MoMe

Papers citing "Constitutional AI: Harmlessness from AI Feedback"

Showing 50 of 1,106 citing papers
Inverse-Q*: Token Level Reinforcement Learning for Aligning Large Language Models Without Preference Data
Han Xia
Songyang Gao
Qiming Ge
Zhiheng Xi
Qi Zhang
Xuanjing Huang
38
4
0
27 Aug 2024
Explainable Concept Generation through Vision-Language Preference Learning
Aditya Taparia
Som Sagar
Ransalu Senanayake
FAtt
29
2
0
24 Aug 2024
Personality Alignment of Large Language Models
Minjun Zhu
Linyi Yang
Yue Zhang
ALM
67
5
0
21 Aug 2024
QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning
Yilun Kong
Hangyu Mao
Qi Zhao
Bin Zhang
Jingqing Ruan
Li Shen
Yongzhe Chang
Xueqian Wang
Rui Zhao
Dacheng Tao
OffRL
36
1
0
20 Aug 2024
Value Alignment from Unstructured Text
Inkit Padhi
K. Ramamurthy
P. Sattigeri
Manish Nagireddy
Pierre L. Dognin
Kush R. Varshney
32
0
0
19 Aug 2024
Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning
S. Poddar
Yanming Wan
Hamish Ivison
Abhishek Gupta
Natasha Jaques
37
35
0
19 Aug 2024
Importance Weighting Can Help Large Language Models Self-Improve
Chunyang Jiang
Chi-min Chan
Wei Xue
Qifeng Liu
Yike Guo
35
3
0
19 Aug 2024
How Susceptible are LLMs to Influence in Prompts?
Sotiris Anagnostidis
Jannis Bulian
LRM
38
16
0
17 Aug 2024
Automated Design of Agentic Systems
Shengran Hu
Cong Lu
Jeff Clune
AI4CE
42
41
0
15 Aug 2024
Agent Q: Advanced Reasoning and Learning for Autonomous AI Agents
Pranav Putta
Edmund Mills
Naman Garg
S. Motwani
Chelsea Finn
Divyansh Garg
Rafael Rafailov
LLMAG
LRM
28
66
0
13 Aug 2024
Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment
Karel D'Oosterlinck
Winnie Xu
Chris Develder
Thomas Demeester
A. Singh
Christopher Potts
Douwe Kiela
Shikib Mehri
38
10
0
12 Aug 2024
Document-Level Event Extraction with Definition-Driven ICL
Zhuoyuan Liu
Yilin Luo
78
1
0
10 Aug 2024
Can a Bayesian Oracle Prevent Harm from an Agent?
Yoshua Bengio
Michael K. Cohen
Nikolay Malkin
Matt MacDermott
Damiano Fornasiere
Pietro Greiner
Younesse Kaddar
45
4
0
09 Aug 2024
EnJa: Ensemble Jailbreak on Large Language Models
Jiahao Zhang
Zilong Wang
Ruofan Wang
Xingjun Ma
Yu-Gang Jiang
AAML
36
1
0
07 Aug 2024
On the Generalization of Preference Learning with DPO
Shawn Im
Yixuan Li
52
1
0
06 Aug 2024
Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models
Zi Liang
Haibo Hu
Qingqing Ye
Yaxin Xiao
Haoyang Li
AAML
ELM
SILM
48
6
0
05 Aug 2024
Dialog Flow Induction for Constrainable LLM-Based Chatbots
Stuti Agrawal
Nishi Uppuluri
Pranav Pillai
R. Reddy
Zoey Li
Gokhan Tur
Dilek Z. Hakkani-Tür
Heng Ji
42
1
0
03 Aug 2024
Mission Impossible: A Statistical Perspective on Jailbreaking LLMs
Jingtong Su
Mingyu Lee
SangKeun Lee
43
8
0
02 Aug 2024
Small Molecule Optimization with Large Language Models
Philipp Guevorguian
Menua Bedrosian
Tigran Fahradyan
Gayane Chilingaryan
Hrant Khachatrian
Armen Aghajanyan
40
1
0
26 Jul 2024
Right Now, Wrong Then: Non-Stationary Direct Preference Optimization under Preference Drift
Seongho Son
William Bankes
Sayak Ray Chowdhury
Brooks Paige
Ilija Bogunovic
39
4
0
26 Jul 2024
Self-Directed Synthetic Dialogues and Revisions Technical Report
Nathan Lambert
Hailey Schoelkopf
Aaron Gokaslan
Luca Soldaini
Valentina Pyatkin
Louis Castricato
SyDa
45
3
0
25 Jul 2024
The Dark Side of Function Calling: Pathways to Jailbreaking Large Language Models
Zihui Wu
Haichang Gao
Jianping He
Ping Wang
29
6
0
25 Jul 2024
Course-Correction: Safety Alignment Using Synthetic Preferences
Rongwu Xu
Yishuo Cai
Zhenhong Zhou
Renjie Gu
Haiqin Weng
Yan Liu
Lei Bai
Wei Xu
Han Qiu
29
4
0
23 Jul 2024
Boosting Reward Model with Preference-Conditional Multi-Aspect Synthetic Data Generation
Jiaming Shen
Ran Xu
Yennie Jun
Zhen Qin
Tianqi Liu
Carl Yang
Yi Liang
Simon Baumgartner
Michael Bendersky
SyDa
64
4
0
22 Jul 2024
Consent in Crisis: The Rapid Decline of the AI Data Commons
Shayne Longpre
Robert Mahari
Ariel N. Lee
Campbell Lund
Hamidah Oderinwale
...
Hanlin Li
Daphne Ippolito
Sara Hooker
Jad Kabbara
Sandy Pentland
69
36
0
20 Jul 2024
Improving Context-Aware Preference Modeling for Language Models
Silviu Pitis
Ziang Xiao
Nicolas Le Roux
Alessandro Sordoni
38
8
0
20 Jul 2024
Internal Consistency and Self-Feedback in Large Language Models: A Survey
Xun Liang
Shichao Song
Zifan Zheng
Hanyu Wang
Qingchen Yu
...
Rong-Hua Li
Peng Cheng
Zhonghao Wang
Feiyu Xiong
Zhiyu Li
HILM
LRM
65
25
0
19 Jul 2024
Clinical Reading Comprehension with Encoder-Decoder Models Enhanced by Direct Preference Optimization
Md Sultan al Nahian
R. Kavuluru
MedIm
AI4CE
39
0
0
19 Jul 2024
Decomposed Direct Preference Optimization for Structure-Based Drug Design
Xiwei Cheng
Xiangxin Zhou
Yuwei Yang
Yu Bao
Quanquan Gu
38
3
0
19 Jul 2024
Learning Goal-Conditioned Representations for Language Reward Models
Vaskar Nath
Dylan Slack
Jeff Da
Yuntao Ma
Hugh Zhang
Spencer Whitehead
Sean Hendryx
29
0
0
18 Jul 2024
Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review
Masatoshi Uehara
Yulai Zhao
Tommaso Biancalani
Sergey Levine
63
22
0
18 Jul 2024
Prover-Verifier Games improve legibility of LLM outputs
Jan Hendrik Kirchner
Yining Chen
Harri Edwards
Jan Leike
Nat McAleese
Yuri Burda
LRM
AAML
25
24
0
18 Jul 2024
Weak-to-Strong Reasoning
Yuqing Yang
Yan Ma
Pengfei Liu
LRM
37
13
0
18 Jul 2024
Building an Ethical and Trustworthy Biomedical AI Ecosystem for the Translational and Clinical Integration of Foundational Models
Simha Sankar Baradwaj
Destiny Gilliland
Jack Rincon
Henning Hermjakob
Yu Yan
...
Dean Wang
Karol Watson
Alex Bui
Wei Wang
Peipei Ping
48
5
0
18 Jul 2024
Analyzing the Generalization and Reliability of Steering Vectors
Daniel Tan
David Chanin
Aengus Lynch
Dimitrios Kanoulas
Brooks Paige
Adrià Garriga-Alonso
Robert Kirk
LLMSV
84
16
0
17 Jul 2024
Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias
Waqar Hussain
43
0
0
16 Jul 2024
BadRobot: Jailbreaking Embodied LLMs in the Physical World
Hangtao Zhang
Chenyu Zhu
Xianlong Wang
Ziqi Zhou
Yichen Wang
...
Shengshan Hu
Leo Yu Zhang
Aishan Liu
Peijin Guo
LM&Ro
53
7
0
16 Jul 2024
Bringing AI Participation Down to Scale: A Comment on OpenAI's Democratic Inputs to AI Project
David Moats
Chandrima Ganguly
VLM
40
0
0
16 Jul 2024
Qwen2 Technical Report
An Yang
Baosong Yang
Binyuan Hui
Jian Xu
Bowen Yu
...
Yuqiong Liu
Zeyu Cui
Zhenru Zhang
Zhifang Guo
Zhi-Wei Fan
OSLM
VLM
MU
60
792
0
15 Jul 2024
Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training
Youliang Yuan
Wenxiang Jiao
Wenxuan Wang
Jen-tse Huang
Jiahao Xu
Tian Liang
Pinjia He
Zhaopeng Tu
45
19
0
12 Jul 2024
New Desiderata for Direct Preference Optimization
Xiangkun Hu
Tong He
David Wipf
44
2
0
12 Jul 2024
On LLM Wizards: Identifying Large Language Models' Behaviors for Wizard of Oz Experiments
Jingchao Fang
Nikos Aréchiga
Keiichi Namaoshi
N. Bravo
Candice L Hogan
David A. Shamma
44
3
0
10 Jul 2024
Self-Recognition in Language Models
Tim R. Davidson
Viacheslav Surkov
V. Veselovsky
Giuseppe Russo
Robert West
Çağlar Gülçehre
PILM
248
2
0
09 Jul 2024
ChatGPT Doesn't Trust Chargers Fans: Guardrail Sensitivity in Context
Victoria R. Li
Yida Chen
Naomi Saphra
43
3
0
09 Jul 2024
Virtual Personas for Language Models via an Anthology of Backstories
Suhong Moon
Marwa Abdulhai
Minwoo Kang
Joseph Suh
Widyadewi Soedarmadji
Eran Kohen Behar
David M. Chan
49
11
0
09 Jul 2024
Prompting Techniques for Secure Code Generation: A Systematic Investigation
Catherine Tony
Nicolás E. Díaz Ferreyra
Markus Mutas
Salem Dhiff
Riccardo Scandariato
SILM
76
9
0
09 Jul 2024
ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models
Yuzhe Gu
Ziwei Ji
Wenwei Zhang
Chengqi Lyu
Dahua Lin
Kai Chen
HILM
39
5
0
05 Jul 2024
Spontaneous Reward Hacking in Iterative Self-Refinement
Jane Pan
He He
Samuel R. Bowman
Shi Feng
40
10
0
05 Jul 2024
Improving Sample Efficiency of Reinforcement Learning with Background Knowledge from Large Language Models
Fuxiang Zhang
Junyou Li
Yi-Chen Li
Zongzhang Zhang
Yang Yu
Deheng Ye
OffRL
KELM
47
1
0
04 Jul 2024
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
Pan Zhang
Xiaoyi Dong
Yuhang Zang
Yuhang Cao
Rui Qian
...
Kai Chen
Jifeng Dai
Yu Qiao
Dahua Lin
Jiaqi Wang
45
100
0
03 Jul 2024