NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application

9 February 2021
Chuhan Wu, Fangzhao Wu, Yang Yu, Tao Qi, Yongfeng Huang, Qi Liu
    VLM

Papers citing "NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application"

9 / 9 papers shown
Does Knowledge Distillation Matter for Large Language Model based Bundle Generation?
Kaidong Feng, Zhu Sun, Jie Yang, Hui Fang, Xinghua Qu, Wen Liu
24 Apr 2025
Revisiting Language Models in Neural News Recommender Systems
Yuyue Zhao, Jin Huang, David Vos, Maarten de Rijke
KELM
20 Jan 2025
EmbSum: Leveraging the Summarization Capabilities of Large Language Models for Content-Based Recommendations
Chiyu Zhang, Yifei Sun, Minghao Wu, Jun Chen, Jie Lei, ..., Angli Liu, Ji Zhu, Sem Park, Ning Yao, Bo Long
OffRL
19 May 2024
ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models
Qijiong Liu, Nuo Chen, Tetsuya Sakai, Xiao-Ming Wu
11 May 2023
Gradient Knowledge Distillation for Pre-trained Language Models
Lean Wang, Lei Li, Xu Sun
VLM
02 Nov 2022
User recommendation system based on MIND dataset
Niran A. Abdulhussein, Ahmed J. Obaid
06 Sep 2022
Few-shot News Recommendation via Cross-lingual Transfer
Taicheng Guo, Lu Yu, B. Shihada, Xiangliang Zhang
28 Jul 2022
Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
LM&MA, VLM
18 Mar 2020
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
07 Feb 2020