Contextual Moral Value Alignment Through Context-Based Aggregation
arXiv:2403.12805
19 March 2024
Pierre Dognin, Jesus Rios, Ronny Luss, Inkit Padhi, Matthew D Riemer, Miao Liu, P. Sattigeri, Manish Nagireddy, Kush R. Varshney, Djallel Bouneffouf

Papers citing "Contextual Moral Value Alignment Through Context-Based Aggregation"

5 of 5 papers shown
Enabling Realtime Reinforcement Learning at Scale with Staggered Asynchronous Inference
Matthew D Riemer, G. Subbaraj, Glen Berseth, Irina Rish
OffRL · 82 · 1 · 0 · 18 Dec 2024

MAP: Multi-Human-Value Alignment Palette
Xinran Wang, Qi Le, A. N. Ahmed, Enmao Diao, Yi Zhou, Nathalie Baracaldo, Jie Ding, Ali Anwar
16 · 2 · 0 · 24 Oct 2024

Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations
Swapnaja Achintalwar, Ioana Baldini, Djallel Bouneffouf, Joan Byamugisha, Maria Chang, ..., P. Sattigeri, Moninder Singh, S. Thwala, Rosario A. Uceda-Sosa, Kush R. Varshney
50 · 4 · 0 · 08 Mar 2024

Decolonial AI Alignment: Openness, Viśeṣa-Dharma, and Including Excluded Knowledges
Kush R. Varshney
44 · 2 · 0 · 10 Sep 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 366 · 12,003 · 0 · 04 Mar 2022