ResearchTrend.AI

arXiv:2009.06108 — Cited By
Spoiled for Choice? Personalized Recommendation for Healthcare Decisions: A Multi-Armed Bandit Approach

13 September 2020
Tongxin Zhou, Yingfei Wang, Lu Yan, Yong Tan

Papers citing "Spoiled for Choice? Personalized Recommendation for Healthcare Decisions: A Multi-Armed Bandit Approach"

4 citing papers:
Constrained Online Decision-Making: A Unified Framework
Haichen Hu, David Simchi-Levi, Navid Azizan
11 May 2025
A Unified Regularization Approach to High-Dimensional Generalized Tensor Bandits
Jiannan Li, Yiyang Yang, Shaojie Tang, Yao Wang
18 Jan 2025
CAREForMe: Contextual Multi-Armed Bandit Recommendation Framework for Mental Health
Sheng Yu, Narjes Nourzad, R. Semple, Yixue Zhao, Emily Zhou, Bhaskar Krishnamachari
26 Jan 2024
A Scalable Recommendation Engine for New Users and Items
Boya Xu, Yiting Deng, C. Mela
06 Sep 2022