Enhancing News Recommendation with Hierarchical LLM Prompting

Hai-Dang Kieu
Delvin Ce Zhang
Minh Duc Nguyen
Min Xu
Qiang Wu
Dung D. Le
Abstract

Personalized news recommendation systems often struggle to capture the complexity of user preferences because they rely heavily on shallow representations such as article titles and abstracts. To address this problem, we introduce a novel method, PNR-LLM (Large Language Models for Personalized News Recommendation). Specifically, PNR-LLM harnesses the generation capabilities of LLMs to enrich news titles and abstracts, and consequently improves recommendation quality. PNR-LLM contains a novel module, News Enrichment via LLMs, which generates deeper semantic information and relevant entities from articles, transforming shallow content into richer representations. We further propose an attention mechanism to aggregate the enriched semantic- and entity-level data, forming unified user and news embeddings that enable more accurate user-news matching. Extensive experiments on the MIND datasets show that PNR-LLM outperforms state-of-the-art baselines. Moreover, the proposed data enrichment module is model-agnostic, and we empirically show that applying it to multiple existing models further improves their performance, verifying the advantage of our design.
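The attention-based aggregation described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the `aggregate` function, the learned query vector `q`, and the two-view (semantic/entity) setup are all assumptions for demonstration.

```python
# Illustrative sketch: attention-weighted aggregation of semantic- and
# entity-level embeddings into a single unified news embedding.
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

def aggregate(views, query):
    """views: (n_views, d) stacked embeddings; query: (d,) attention query.

    Returns a (d,) weighted combination of the views.
    """
    scores = views @ query       # one attention score per view
    weights = softmax(scores)    # normalize scores to a distribution
    return weights @ views       # weighted sum over views

rng = np.random.default_rng(0)
d = 8
semantic = rng.normal(size=d)   # e.g. embedding of LLM-enriched title/abstract
entity = rng.normal(size=d)     # e.g. embedding of LLM-extracted entities
q = rng.normal(size=d)          # hypothetical learned attention query

news_vec = aggregate(np.stack([semantic, entity]), q)
print(news_vec.shape)  # (8,)
```

In practice the query vector and embedding encoders would be learned end-to-end; the same aggregation pattern applies on the user side over clicked-news embeddings.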

@article{kieu2025_2504.20452,
  title={Enhancing News Recommendation with Hierarchical LLM Prompting},
  author={Hai-Dang Kieu and Delvin Ce Zhang and Minh Duc Nguyen and Min Xu and Qiang Wu and Dung D. Le},
  journal={arXiv preprint arXiv:2504.20452},
  year={2025}
}