NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application
arXiv: 2102.04887 · 9 February 2021
Chuhan Wu, Fangzhao Wu, Yang Yu, Tao Qi, Yongfeng Huang, Qi Liu
Tags: VLM

Papers citing "NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application" (9 of 9 papers shown)

Does Knowledge Distillation Matter for Large Language Model based Bundle Generation?
Kaidong Feng, Zhu Sun, Jie Yang, Hui Fang, Xinghua Qu, Wen Liu
24 Apr 2025

Revisiting Language Models in Neural News Recommender Systems
Yuyue Zhao, Jin Huang, David Vos, Maarten de Rijke
Tags: KELM
20 Jan 2025

EmbSum: Leveraging the Summarization Capabilities of Large Language Models for Content-Based Recommendations
Chiyu Zhang, Yifei Sun, Minghao Wu, Jun Chen, Jie Lei, ..., Angli Liu, Ji Zhu, Sem Park, Ning Yao, Bo Long
Tags: OffRL
19 May 2024

ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models
Qijiong Liu, Nuo Chen, Tetsuya Sakai, Xiao-Ming Wu
11 May 2023

Gradient Knowledge Distillation for Pre-trained Language Models
Lean Wang, Lei Li, Xu Sun
Tags: VLM
02 Nov 2022

User recommendation system based on MIND dataset
Niran A. Abdulhussein, Ahmed J. Obaid
06 Sep 2022

Few-shot News Recommendation via Cross-lingual Transfer
Taicheng Guo, Lu Yu, B. Shihada, Xiangliang Zhang
28 Jul 2022

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
Tags: LM&MA, VLM
18 Mar 2020

BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
07 Feb 2020