Bandit Social Learning: Exploration under Myopic Behavior

15 February 2023
Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins
arXiv:2302.07425

Papers citing "Bandit Social Learning: Exploration under Myopic Behavior"

15 / 15 papers shown

Greedy Algorithm for Structured Bandits: A Sharp Characterization of Asymptotic Success / Failure
Aleksandrs Slivkins, Yunzong Xu, Shiliang Zuo
06 Mar 2025

Exploration and Persuasion
Aleksandrs Slivkins
22 Oct 2024

Can large language models explore in-context?
Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, Aleksandrs Slivkins
22 Mar 2024

Incentivized Learning in Principal-Agent Bandit Games
Antoine Scheid, D. Tiapkin, Etienne Boursier, Aymeric Capitaine, El-Mahdi El-Mhamdi, Eric Moulines, Michael I. Jordan, Alain Durmus
06 Mar 2024

Replication-proof Bandit Mechanism Design with Bayesian Agents
Seyed A. Esmaeili, Mohammadtaghi Hajiaghayi, Suho Shin
28 Dec 2023

Incentivized Collaboration in Active Learning
Lee Cohen, Han Shao
01 Nov 2023

Be Greedy in Multi-Armed Bandits
Matthieu Jedor, Jonathan Louëdec, Vianney Perchet
04 Jan 2021

Greedy Algorithm almost Dominates in Smoothed Contextual Bandits
Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu
19 May 2020

The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms
Mohsen Bayati, N. Hamidi, Ramesh Johari, Khashayar Khosravi
24 Feb 2020

The Price of Incentivizing Exploration: A Characterization via Thompson Sampling and Sample Complexity
Mark Sellke, Aleksandrs Slivkins
03 Feb 2020

Introduction to Multi-Armed Bandits
Aleksandrs Slivkins
15 Apr 2019

On the Non-asymptotic and Sharp Lower Tail Bounds of Random Variables
Anru R. Zhang, Yuchen Zhou
21 Oct 2018

The Externalities of Exploration and How Data Diversity Helps Exploitation
Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu
01 Jun 2018

Mostly Exploration-Free Algorithms for Contextual Bandits
Hamsa Bastani, Mohsen Bayati, Khashayar Khosravi
28 Apr 2017

Thompson Sampling: An Asymptotically Optimal Finite Time Analysis
E. Kaufmann, N. Korda, Rémi Munos
18 May 2012