Transformers Learn to Implement Multi-step Gradient Descent with Chain of Thought

28 February 2025 · arXiv 2502.21212
Jianhao Huang, Zixuan Wang, Jason D. Lee
LRM

Papers citing "Transformers Learn to Implement Multi-step Gradient Descent with Chain of Thought"

2 citing papers
Deciphering Trajectory-Aided LLM Reasoning: An Optimization Perspective
Junnan Liu, Hongwei Liu, Linchen Xiao, Shudong Liu, Taolin Zhang, Zihan Ma, Songyang Zhang, Kai Chen
LRM · 26 May 2025
Training Dynamics of In-Context Learning in Linear Attention
Yedi Zhang, Aaditya K. Singh, Peter E. Latham, Andrew Saxe
MLT · 27 January 2025