How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs
Shumiao Ouyang, Hayong Yun, Xingjian Zheng
arXiv:2406.01168 · 3 June 2024

Papers citing "How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs" (2 papers shown)

Evaluating and Aligning Human Economic Risk Preferences in LLMs
J. Liu, Yi Yang, Kar Yan Tam
09 Mar 2025

Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
Deep Ganguli, Liane Lovitt, John Kernion, Amanda Askell, Yuntao Bai, ..., Nicholas Joseph, Sam McCandlish, C. Olah, Jared Kaplan, Jack Clark
23 Aug 2022