AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression
arXiv:2305.10010 · 17 May 2023
Siyue Wu
Hongzhan Chen
Xiaojun Quan
Qifan Wang
Rui Wang
Papers citing "AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression"
5 / 5 papers shown
Advantage-Guided Distillation for Preference Alignment in Small Language Models
Shiping Gao
Fanqi Wan
Jiajian Guo
Xiaojun Quan
Qifan Wang
25 Feb 2025
Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights
Mohamad Ballout
U. Krumnack
Gunther Heidemann
Kai-Uwe Kühnberger
19 Sep 2024
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei
Xuezhi Wang
Dale Schuurmans
Maarten Bosma
Brian Ichter
F. Xia
Ed H. Chi
Quoc Le
Denny Zhou
28 Jan 2022
Distilling Linguistic Context for Language Model Compression
Geondo Park
Gyeongman Kim
Eunho Yang
17 Sep 2021
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
20 Apr 2018