Constitutional AI: Harmlessness from AI Feedback

arXiv:2212.08073 · 15 December 2022
Yuntao Bai
Saurav Kadavath
Sandipan Kundu
Amanda Askell
Jackson Kernion
Andy Jones
A. Chen
Anna Goldie
Azalia Mirhoseini
C. McKinnon
Carol Chen
Catherine Olsson
C. Olah
Danny Hernandez
Dawn Drain
Deep Ganguli
Dustin Li
Eli Tran-Johnson
E. Perez
Jamie Kerr
J. Mueller
Jeff Ladish
J. Landau
Kamal Ndousse
Kamilė Lukošiūtė
Liane Lovitt
Michael Sellitto
Nelson Elhage
Nicholas Schiefer
Noemí Mercado
Nova DasSarma
R. Lasenby
Robin Larson
Sam Ringer
Scott R. Johnston
Shauna Kravec
S. E. Showk
Stanislav Fort
Tamera Lanham
Timothy Telleen-Lawton
Tom Conerly
T. Henighan
Tristan Hume
Sam Bowman
Zac Hatfield-Dodds
Benjamin Mann
Dario Amodei
Nicholas Joseph
Sam McCandlish
Tom B. Brown
Jared Kaplan
SyDa · MoMe

Papers citing "Constitutional AI: Harmlessness from AI Feedback"

Showing 50 of 1,202 citing papers
EAGLE: A Domain Generalization Framework for AI-generated Text Detection
Amrita Bhattacharjee
Raha Moraffah
Joshua Garland
Huan Liu
DeLMO
23 Mar 2024
Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits
Jimin Mun
Liwei Jiang
Jenny T Liang
Inyoung Cheong
Nicole DeCario
Yejin Choi
Tadayoshi Kohno
Maarten Sap
21 Mar 2024
Testing the Limits of Jailbreaking Defenses with the Purple Problem
Taeyoun Kim
Suhas Kotha
Aditi Raghunathan
AAML
20 Mar 2024
RewardBench: Evaluating Reward Models for Language Modeling
Nathan Lambert
Valentina Pyatkin
Jacob Morrison
Lester James V. Miranda
Bill Yuchen Lin
...
Sachin Kumar
Tom Zick
Yejin Choi
Noah A. Smith
Hanna Hajishirzi
ALM
20 Mar 2024
Chain-of-Interaction: Enhancing Large Language Models for Psychiatric Behavior Understanding by Dyadic Contexts
Guangzeng Han
Weisi Liu
Xiaolei Huang
Brian Borsari
20 Mar 2024
Defending Against Indirect Prompt Injection Attacks With Spotlighting
Keegan Hines
Gary Lopez
Matthew Hall
Federico Zarfati
Yonatan Zunger
Emre Kiciman
AAML · SILM
20 Mar 2024
HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning
Fucai Ke
Zhixi Cai
Simindokht Jahangard
Weiqing Wang
P. D. Haghighi
Hamid Rezatofighi
LRM
19 Mar 2024
RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content
Zhuowen Yuan
Zidi Xiong
Yi Zeng
Ning Yu
Ruoxi Jia
Basel Alomair
Yue Liu
AAML · KELM
19 Mar 2024
Interpretable User Satisfaction Estimation for Conversational Systems with Large Language Models
Ying-Chun Lin
Jennifer Neville
Jack W. Stokes
Longqi Yang
Tara Safavi
...
Xia Song
Georg Buscher
Saurabh Tiwary
Brent J. Hecht
J. Teevan
ELM
19 Mar 2024
Scaling Data Diversity for Fine-Tuning Language Models in Human Alignment
Feifan Song
Bowen Yu
Hao Lang
Haiyang Yu
Fei Huang
Houfeng Wang
Yongbin Li
ALM
17 Mar 2024
PERL: Parameter Efficient Reinforcement Learning from Human Feedback
Hakim Sidahmed
Samrat Phatale
Alex Hutcheson
Zhuonan Lin
Zhan Chen
...
Jessica Hoffmann
Hassan Mansoor
Wei Li
Abhinav Rastogi
Lucas Dixon
15 Mar 2024
Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
Zhiqing Sun
Longhui Yu
Yikang Shen
Weiyang Liu
Yiming Yang
Sean Welleck
Chuang Gan
14 Mar 2024
CodeUltraFeedback: An LLM-as-a-Judge Dataset for Aligning Large Language Models to Coding Preferences
Martin Weyssow
Aton Kamanda
H. Sahraoui
ALM
14 Mar 2024
Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization
Renjie Pi
Tianyang Han
Wei Xiong
Jipeng Zhang
Runtao Liu
Boyao Wang
Tong Zhang
MLLM
13 Mar 2024
SOTOPIA-π: Interactive Learning of Socially Intelligent Language Agents
Ruiyi Wang
Haofei Yu
W. Zhang
Zhengyang Qi
Maarten Sap
Graham Neubig
Yonatan Bisk
Hao Zhu
LLMAG
13 Mar 2024
Human Alignment of Large Language Models through Online Preference Optimisation
Daniele Calandriello
Daniel Guo
Rémi Munos
Mark Rowland
Yunhao Tang
...
Michal Valko
Tianqi Liu
Rishabh Joshi
Zeyu Zheng
Bilal Piot
13 Mar 2024
Tastle: Distract Large Language Models for Automatic Jailbreak Attack
Zeguan Xiao
Yan Yang
Guanhua Chen
Yun-Nung Chen
AAML
13 Mar 2024
HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback
Ang Li
Qiugen Xiao
Peng Cao
Jian Tang
Yi Yuan
...
Weidong Guo
Yukang Gan
Jeffrey Xu Yu
D. Wang
Ying Shan
VLM · ALM
13 Mar 2024
Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models
Ning Ding
Yulin Chen
Ganqu Cui
Xingtai Lv
Weilin Zhao
Ruobing Xie
Bowen Zhou
Zhiyuan Liu
Maosong Sun
ALM · MoMe · AI4CE
13 Mar 2024
From Paper to Card: Transforming Design Implications with Generative AI
Donghoon Shin
Lucy Lu Wang
Gary Hsieh
12 Mar 2024
CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
Qibing Ren
Chang Gao
Jing Shao
Junchi Yan
Xin Tan
Wai Lam
Lizhuang Ma
ALM · ELM · AAML
12 Mar 2024
Improving Reinforcement Learning from Human Feedback Using Contrastive Rewards
Wei Shen
Xiaoying Zhang
Yuanshun Yao
Rui Zheng
Hongyi Guo
Yang Liu
ALM
12 Mar 2024
Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations
Swapnaja Achintalwar
Ioana Baldini
Djallel Bouneffouf
Joan Byamugisha
Maria Chang
...
P. Sattigeri
Moninder Singh
S. Thwala
Rosario A. Uceda-Sosa
Kush R. Varshney
08 Mar 2024
On Protecting the Data Privacy of Large Language Models (LLMs): A Survey
Biwei Yan
Kun Li
Minghui Xu
Yueyan Dong
Yue Zhang
Zhaochun Ren
Xiuzhen Cheng
AILaw · PILM
08 Mar 2024
ConstitutionalExperts: Training a Mixture of Principle-based Prompts
S. Petridis
Ben Wedin
Ann Yuan
James Wexler
Nithum Thain
07 Mar 2024
Teaching Large Language Models to Reason with Reinforcement Learning
Alex Havrilla
Yuqing Du
Sharath Chandra Raparthy
Christoforos Nalmpantis
Jane Dwivedi-Yu
Maksym Zhuravinskyi
Eric Hambro
Sainbayar Sukhbaatar
Roberta Raileanu
ReLM · LRM
07 Mar 2024
Proxy-RLHF: Decoupling Generation and Alignment in Large Language Model with Proxy
Yu Zhu
Chuxiong Sun
Wenfei Yang
Wenqiang Wei
Simin Niu
...
Zhiyu Li
Shifeng Zhang
Feiyu Xiong
Jie Hu
Mingchuan Yang
07 Mar 2024
Aligners: Decoupling LLMs and Alignment
Lilian Ngweta
Mayank Agarwal
Subha Maity
Alex Gittens
Yuekai Sun
Mikhail Yurochkin
07 Mar 2024
On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models
Xinpeng Wang
Shitong Duan
Xiaoyuan Yi
Jing Yao
Shanlin Zhou
Zhihua Wei
Peng Zhang
Dongkuan Xu
Maosong Sun
Xing Xie
OffRL
07 Mar 2024
Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization
Shitong Duan
Xiaoyuan Yi
Peng Zhang
Tun Lu
Xing Xie
Ning Gu
06 Mar 2024
Human vs. Machine: Behavioral Differences Between Expert Humans and Language Models in Wargame Simulations
Max Lamparth
Anthony Corso
Jacob Ganz
O. Mastro
Jacquelyn G. Schneider
Harold Trinkunas
06 Mar 2024
The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning
Nathaniel Li
Alexander Pan
Anjali Gopal
Summer Yue
Daniel Berrios
...
Yan Shoshitaishvili
Jimmy Ba
K. Esvelt
Alexandr Wang
Dan Hendrycks
ELM
05 Mar 2024
Evaluating and Optimizing Educational Content with Large Language Model Judgments
Joy He-Yueya
Noah D. Goodman
Emma Brunskill
AI4Ed
05 Mar 2024
Towards Training A Chinese Large Language Model for Anesthesiology
Zhonghai Wang
Jie Jiang
Yibing Zhan
Bohao Zhou
Yanhong Li
...
Liang Ding
Hua Jin
Jun Peng
Xu Lin
Weifeng Liu
LM&MA
05 Mar 2024
DACO: Towards Application-Driven and Comprehensive Data Analysis via Code Generation
Xueqing Wu
Rui Zheng
Jingzhen Sha
Te-Lin Wu
Hanyu Zhou
Mohan Tang
Kai-Wei Chang
Nanyun Peng
Haoran Huang
04 Mar 2024
AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks
Yifan Zeng
Yiran Wu
Xiao Zhang
Huazheng Wang
Qingyun Wu
LLMAG · AAML
02 Mar 2024
Accelerating Greedy Coordinate Gradient via Probe Sampling
Yiran Zhao
Wenyue Zheng
Tianle Cai
Xuan Long Do
Kenji Kawaguchi
Anirudh Goyal
Michael Shieh
02 Mar 2024
LLMCRIT: Teaching Large Language Models to Use Criteria
Weizhe Yuan
Pengfei Liu
Matthias Gallé
ALM
02 Mar 2024
Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment
Yiju Guo
Ganqu Cui
Lifan Yuan
Ning Ding
Jiexin Wang
...
Ruobing Xie
Jie Zhou
Yankai Lin
Zhiyuan Liu
Maosong Sun
29 Feb 2024
Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards
Haoxiang Wang
Yong Lin
Wei Xiong
Rui Yang
Shizhe Diao
Shuang Qiu
Han Zhao
Tong Zhang
28 Feb 2024
Few-Shot Fairness: Unveiling LLM's Potential for Fairness-Aware Classification
Garima Chhikara
Anurag Sharma
Kripabandhu Ghosh
Abhijnan Chakraborty
28 Feb 2024
Prospect Personalized Recommendation on Large Language Model-based Agent Platform
Jizhi Zhang
Keqin Bao
Wenjie Wang
Yang Zhang
Wentao Shi
Wanhong Xu
Fuli Feng
Tat-Seng Chua
LLMAG
28 Feb 2024
LLM Task Interference: An Initial Study on the Impact of Task-Switch in Conversational History
Akash Gupta
Ivaxi Sheth
Vyas Raina
Mark Gales
Mario Fritz
28 Feb 2024
Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction
Tong Liu
Yingjie Zhang
Zhe Zhao
Yinpeng Dong
Guozhu Meng
Kai Chen
AAML
28 Feb 2024
SynArtifact: Classifying and Alleviating Artifacts in Synthetic Images via Vision-Language Model
Bin Cao
Jianhao Yuan
Yexin Liu
Jian Li
Shuyang Sun
Jing Liu
Bo Zhao
DiffM
28 Feb 2024
On the Challenges and Opportunities in Generative AI
Laura Manduchi
Kushagra Pandey
Robert Bamler
Ryan Cotterell
Sina Daubener
...
F. Wenzel
Frank Wood
Stephan Mandt
Vincent Fortuin
28 Feb 2024
Self-Refinement of Language Models from External Proxy Metrics Feedback
Keshav Ramji
Young-Suk Lee
Ramón Fernandez Astudillo
M. Sultan
Tahira Naseem
Asim Munawar
Radu Florian
Salim Roukos
HILM
27 Feb 2024
SoFA: Shielded On-the-fly Alignment via Priority Rule Following
Xinyu Lu
Bowen Yu
Yaojie Lu
Hongyu Lin
Haiyang Yu
Le Sun
Xianpei Han
Yongbin Li
27 Feb 2024
Speak Out of Turn: Safety Vulnerability of Large Language Models in Multi-turn Dialogue
Zhenhong Zhou
Jiuyang Xiang
Haopeng Chen
Quan Liu
Zherui Li
Sen Su
27 Feb 2024
Fact-and-Reflection (FaR) Improves Confidence Calibration of Large Language Models
Xinran Zhao
Hongming Zhang
Xiaoman Pan
Wenlin Yao
Dong Yu
Tongshuang Wu
Jianshu Chen
HILM · LRM
27 Feb 2024