arXiv:2307.15190
f-Divergence Minimization for Sequence-Level Knowledge Distillation
Yuqiao Wen, Zichao Li, Wenyu Du, Lili Mou. 27 July 2023.
Papers citing "f-Divergence Minimization for Sequence-Level Knowledge Distillation" (9 of 59 shown):
A Deep Reinforced Model for Abstractive Summarization
Romain Paulus, Caiming Xiong, R. Socher. 11 May 2017. 1,558 citations.

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
Lantao Yu, Weinan Zhang, Jun Wang, Yong Yu. 18 Sep 2016. 2,405 citations.

Sequence-Level Knowledge Distillation
Yoon Kim, Alexander M. Rush. 25 Jun 2016. 1,120 citations.

Deep Reinforcement Learning for Dialogue Generation
Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, Dan Jurafsky. 05 Jun 2016. 1,338 citations.

How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation
Chia-Wei Liu, Ryan J. Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, Joelle Pineau. 25 Mar 2016. 1,298 citations.

A Diversity-Promoting Objective Function for Neural Conversation Models
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, W. Dolan. 11 Oct 2015. 2,401 citations.

Neural Machine Translation of Rare Words with Subword Units
Rico Sennrich, Barry Haddow, Alexandra Birch. 31 Aug 2015. 7,755 citations.

Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean. 09 Mar 2015. 19,660 citations.

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba. 22 Dec 2014. 150,260 citations.