ResearchTrend.AI
RDRec: Rationale Distillation for LLM-based Recommendation (arXiv:2405.10587)

17 May 2024
Xinfeng Wang, Jin Cui, Yoshimi Suzuki, Fumiyo Fukumoto
[LRM]

Papers citing "RDRec: Rationale Distillation for LLM-based Recommendation"

6 papers shown:
• Does Knowledge Distillation Matter for Large Language Model based Bundle Generation? (24 Apr 2025)
  Kaidong Feng, Zhu Sun, Jie Yang, Hui Fang, Xinghua Qu, Wei Liu
• Tapping the Potential of Large Language Models as Recommender Systems: A Comprehensive Framework and Empirical Analysis (17 Jan 2025)
  Lanling Xu, Junjie Zhang, Bingqian Li, Jinpeng Wang, Sheng Chen, Wayne Xin Zhao, Ji-Rong Wen
• LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking (25 Oct 2023) [LRM]
  Zhenrui Yue, Sara Rabhi, G. D. S. P. Moreira, Dong Wang, Even Oldridge
• EXPLAIN, EDIT, GENERATE: Rationale-Sensitive Counterfactual Data Augmentation for Multi-hop Fact Verification (23 Oct 2023)
  Yingjie Zhu, Jiasheng Si, Yibo Zhao, Haiyang Zhu, Deyu Zhou, Yulan He
• Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes (03 May 2023) [ALM]
  Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
• A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation (18 Apr 2021) [HILM]
  Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, W. Dolan