ResearchTrend.AI

Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning

22 May 2024
Yuanhao Yue, Chengyu Wang, Jun Huang, Peng Wang
Tags: ALM

Papers citing "Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning"

7 / 7 papers shown
Large Language Models Meet Stance Detection: A Survey of Tasks, Methods, Applications, Challenges and Future Directions
Lata Pangtey, Anukriti Bhatnagar, Shubhi Bansal, Shahid Shafi Dar, Nagendra Kumar
13 May 2025
DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models
Chengyu Wang, Junbing Yan, Yuanhao Yue, Jun Huang
21 Apr 2025
Training Small Reasoning LLMs with Cognitive Preference Alignment
Wenrui Cai, Chengyu Wang, Junbing Yan, Jun Huang, Xiangzhong Fang
Tags: LRM
14 Apr 2025
A Short Survey on Small Reasoning Models: Training, Inference, Applications and Research Directions
Chengyu Wang, Taolin Zhang, Richang Hong, Jun Huang
Tags: ReLM, LRM
12 Apr 2025
Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning
Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Heng-Chiao Huang, Jiuxiang Gu, Tianyi Zhou
18 Oct 2023
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
Tags: ALM
03 May 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
04 Mar 2022