ResearchTrend.AI
A Model Selection Approach for Corruption Robust Reinforcement Learning
Chen-Yu Wei, Christoph Dann, Julian Zimmert
arXiv:2110.03580 (31 December 2024)

Papers citing "A Model Selection Approach for Corruption Robust Reinforcement Learning" (14 papers shown)
  • MetaCURL: Non-stationary Concave Utility Reinforcement Learning
    B. Moreno, Margaux Brégère, Pierre Gaillard, Nadia Oudjane (30 May 2024) [OffRL]
  • Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback
    Qiwei Di, Jiafan He, Quanquan Gu (16 Apr 2024)
  • Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithm
    Miao Lu, Han Zhong, Tong Zhang, Jose H. Blanchet (04 Apr 2024) [OffRL, OOD]
  • Robust Lipschitz Bandits to Adversarial Corruptions
    Yue Kang, Cho-Jui Hsieh, T. C. Lee (29 May 2023) [AAML]
  • Policy Resilience to Environment Poisoning Attacks on Reinforcement Learning
    Hang Xu, Xinghua Qu, Zinovi Rabinovich (24 Apr 2023)
  • Does Sparsity Help in Learning Misspecified Linear Bandits?
    Jialin Dong, Lin F. Yang (29 Mar 2023)
  • A Blackbox Approach to Best of Both Worlds in Bandits and Beyond
    Christoph Dann, Chen-Yu Wei, Julian Zimmert (20 Feb 2023)
  • Corruption-Robust Algorithms with Uncertainty Weighting for Nonlinear Contextual Bandits and Markov Decision Processes
    Chen Ye, Wei Xiong, Quanquan Gu, Tong Zhang (12 Dec 2022)
  • When is Realizability Sufficient for Off-Policy Reinforcement Learning?
    Andrea Zanette (10 Nov 2022) [OffRL]
  • Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms
    Osama A. Hanna, Lin F. Yang, Christina Fragouli (08 Nov 2022)
  • Best of Both Worlds Model Selection
    Aldo Pacchiano, Christoph Dann, Claudio Gentile (29 Jun 2022)
  • Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions
    Jiafan He, Dongruo Zhou, Tong Zhang, Quanquan Gu (13 May 2022)
  • Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits
    Haipeng Luo, Mengxiao Zhang, Peng Zhao, Zhi-Hua Zhou (12 Feb 2022)
  • Provably Efficient Reinforcement Learning with Linear Function Approximation Under Adaptivity Constraints
    Chi Jin, Zhuoran Yang, Zhaoran Wang (06 Jan 2021) [OffRL]