arXiv:2210.12546
Policy Optimization with Advantage Regularization for Long-Term Fairness in Decision Systems
Eric Yang Yu, Zhizhen Qin, Min Kyung Lee, Sicun Gao
22 October 2022 · OffRL

Papers citing "Policy Optimization with Advantage Regularization for Long-Term Fairness in Decision Systems" (8 papers)

Fairness in Reinforcement Learning with Bisimulation Metrics
S. Rezaei-Shoshtari, Hanna Yurchyk, Scott Fujimoto, Doina Precup, David Meger
03 Jan 2025

The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models
Alexander Pan, Kush S. Bhatia, Jacob Steinhardt
10 Jan 2022

Stabilizing Neural Control Using Self-Learned Almost Lyapunov Critics
Ya-Chien Chang, Sicun Gao
11 Jul 2021

Learning Fair Policies in Multiobjective (Deep) Reinforcement Learning with Average and Discounted Rewards
Umer Siddique, Paul Weng, Matthieu Zimmer
18 Aug 2020 · FaML, OffRL

Convergent Policy Optimization for Safe Reinforcement Learning
Ming Yu, Zhuoran Yang, Mladen Kolar, Zhaoran Wang
26 Oct 2019

The Social Cost of Strategic Classification
S. Milli, John Miller, Anca Dragan, Moritz Hardt
25 Aug 2018

Safe Exploration in Continuous Action Spaces
Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerík, Todd Hester, Cosmin Paduraru, Yuval Tassa
26 Jan 2018

Constrained Policy Optimization
Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel
30 May 2017