ProgressGym: Alignment with a Millennium of Moral Progress

Published: 28 June 2024
Authors: Tianyi Qiu, Yang Zhang, Xuchuan Huang, Jasmine Xinze Li, Yalan Qin, Yaodong Yang
Community: AI4TS
Links: ArXiv (abs) · PDF · HTML · GitHub (22★)

Papers citing "ProgressGym: Alignment with a Millennium of Moral Progress"

7 of 7 citing papers shown:

Application-Driven Value Alignment in Agentic AI Systems: Survey and Perspectives
Wei Zeng, Hengshu Zhu, Chuan Qin, Han Wu, Yihang Cheng, ..., Xiaowei Jin, Yinuo Shen, Zhenxing Wang, Feimin Zhong, Hui Xiong
AI4TS · 11 Jun 2025

The Lock-in Hypothesis: Stagnation by Algorithm
Tianyi Qiu, Zhonghao He, Tejasveer Chugh, Max Kleiman-Weiner
06 Jun 2025

Super Co-alignment of Human and AI for Sustainable Symbiotic Society
Yi Zeng, Yijiao Wang, Enmeng Lu, Dongcheng Zhao, Bing Han, ..., Chao Liu, Yaodong Yang, Yi Zeng, Boyuan Chen, Jinyu Fan
24 Apr 2025

CLASH: Evaluating Language Models on Judging High-Stakes Dilemmas from Multiple Perspectives
Ayoung Lee, Ryan Sungmo Kwon, Peter Railton, Lu Wang
ELM · 15 Apr 2025

Amulet: ReAlignment During Test Time for Personalized Preference Adaptation of LLMs
Zhaowei Zhang, Fengshuo Bai, Qizhi Chen, Chengdong Ma, Mingzhi Wang, Haoran Sun, Zilong Zheng, Yaodong Yang
26 Feb 2025

Inverse-RLignment: Large Language Model Alignment from Demonstrations through Inverse Reinforcement Learning
Hao Sun, M. Schaar
28 Jan 2025

Verifiable by Design: Aligning Language Models to Quote from Pre-Training Data
Jingyu Zhang, Marc Marone, Tianjian Li, Benjamin Van Durme, Daniel Khashabi
05 Apr 2024