ResearchTrend.AI

Training Large-Scale News Recommenders with Pretrained Language Models in the Loop
arXiv:2102.09268

18 February 2021
Shitao Xiao
Zheng Liu
Yingxia Shao
Tao Di
Xing Xie
Topics: VLM, AIFin

Papers citing "Training Large-Scale News Recommenders with Pretrained Language Models in the Loop"

4 / 4 papers shown

1. Uncovering Cross-Domain Recommendation Ability of Large Language Models
   Xinyi Liu, Ruijie Wang, Dachun Sun, Dilek Hakkani-Tur, Tarek F. Abdelzaher
   Metrics: 150 / 0 / 0
   10 Mar 2025

2. Taxonomy-Guided Zero-Shot Recommendations with LLMs
   Yueqing Liang, Liangwei Yang, Chen Wang, Xiongxiao Xu, Philip S. Yu, Kai Shu
   Metrics: 72 / 6 / 0
   21 Feb 2025

3. Revisiting Language Models in Neural News Recommender Systems
   Yuyue Zhao, Jin Huang, David Vos, Maarten de Rijke
   Topics: KELM
   Metrics: 104 / 0 / 0
   20 Jan 2025

4. RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder
   Shitao Xiao, Zheng Liu, Yingxia Shao, Zhao Cao
   Topics: RALM
   Metrics: 118 / 109 / 0
   24 May 2022