Bandit Social Learning: Exploration under Myopic Behavior
Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins
arXiv:2302.07425, 15 February 2023

Papers citing "Bandit Social Learning: Exploration under Myopic Behavior" (15 of 15 shown)
Greedy Algorithm for Structured Bandits: A Sharp Characterization of Asymptotic Success / Failure
Aleksandrs Slivkins, Yunzong Xu, Shiliang Zuo (06 Mar 2025)
Exploration and Persuasion
Aleksandrs Slivkins (22 Oct 2024)
Can large language models explore in-context?
Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, Aleksandrs Slivkins (22 Mar 2024)
Incentivized Learning in Principal-Agent Bandit Games
Antoine Scheid, D. Tiapkin, Etienne Boursier, Aymeric Capitaine, El-Mahdi El-Mhamdi, Eric Moulines, Michael I. Jordan, Alain Durmus (06 Mar 2024)
Replication-proof Bandit Mechanism Design with Bayesian Agents
Seyed A. Esmaeili, Mohammadtaghi Hajiaghayi, Suho Shin (28 Dec 2023)
Incentivized Collaboration in Active Learning
Lee Cohen, Han Shao (01 Nov 2023)
Be Greedy in Multi-Armed Bandits
Matthieu Jedor, Jonathan Louëdec, Vianney Perchet (04 Jan 2021)
Greedy Algorithm almost Dominates in Smoothed Contextual Bandits
Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu (19 May 2020)
The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms
Mohsen Bayati, N. Hamidi, Ramesh Johari, Khashayar Khosravi (24 Feb 2020)
The Price of Incentivizing Exploration: A Characterization via Thompson Sampling and Sample Complexity
Mark Sellke, Aleksandrs Slivkins (03 Feb 2020)
Introduction to Multi-Armed Bandits
Aleksandrs Slivkins (15 Apr 2019)
On the Non-asymptotic and Sharp Lower Tail Bounds of Random Variables
Anru R. Zhang, Yuchen Zhou (21 Oct 2018)
The Externalities of Exploration and How Data Diversity Helps Exploitation
Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu (01 Jun 2018)
Mostly Exploration-Free Algorithms for Contextual Bandits
Hamsa Bastani, Mohsen Bayati, Khashayar Khosravi (28 Apr 2017)
Thompson Sampling: An Asymptotically Optimal Finite Time Analysis
E. Kaufmann, N. Korda, Rémi Munos (18 May 2012)