What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective

31 October 2024
Ming Li, Yanhong Li, Tianyi Zhou
LRM, AI4CE

Papers citing "What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective"

2 / 2 papers shown
Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning
Xinghao Chen, Zhijing Sun, Wenjin Guo, Miaoran Zhang, Yanjun Chen, ..., Hui Su, Yijie Pan, Dietrich Klakow, Wenjie Li, Xiaoyu Shen
LRM
25 Feb 2025
When More is Less: Understanding Chain-of-Thought Length in LLMs
Yuyang Wu, Yifei Wang, Tianqi Du, Stefanie Jegelka, Yisen Wang
LRM
11 Feb 2025