

Combining Offline Causal Inference and Online Bandit Learning for Data Driven Decision (arXiv:2001.05699)

16 January 2020
Li Ye
Yishi Lin
Hong Xie
John C. S. Lui
    CML

Papers citing "Combining Offline Causal Inference and Online Bandit Learning for Data Driven Decision" (2 of 2 shown):

1. Evaluation Methods and Measures for Causal Learning Algorithms
   Lu Cheng, Ruocheng Guo, Raha Moraffah, Paras Sheth, K. S. Candan, Huan Liu
   Topics: CML, ELM
   07 Feb 2022

2. Bounded regret in stochastic multi-armed bandits
   Sébastien Bubeck, Vianney Perchet, Philippe Rigollet
   06 Feb 2013