Reward Is Enough: LLMs Are In-Context Reinforcement Learners

21 May 2025
Kefan Song
Amir Moeini
Peng Wang
Lei Gong
Rohan Chandra
Yanjun Qi
Shangtong Zhang
ReLM, LRM
arXiv: 2506.06303 (abs · PDF · HTML)

Papers citing "Reward Is Enough: LLMs Are In-Context Reinforcement Learners"

2 / 2 papers shown
LLM-First Search: Self-Guided Exploration of the Solution Space
Nathan Herr
Tim Rocktaschel
Roberta Raileanu
LRM
148 · 0 · 0
05 Jun 2025

Can large language models explore in-context?
Akshay Krishnamurthy
Keegan Harris
Dylan J. Foster
Cyril Zhang
Aleksandrs Slivkins
LM&Ro, LLMAG, LRM
278 · 29 · 0
22 Mar 2024