Architecture is All You Need: Improving LLM Recommenders by Dropping the Text

18 June 2025
Kevin Foley
Shaghayegh Agah
Kavya Priyanka Kakinada
arXiv (abs) · PDF · HTML
Main: 5 pages, 1 figure, 4 tables; Bibliography: 2 pages
Abstract

In recent years, there has been an explosion of interest in the applications of large pre-trained language models (PLMs) to recommender systems, with many studies showing strong performance of PLMs on common benchmark datasets. PLM-based recommender models benefit from flexible and customizable prompting, an unlimited vocabulary of recommendable items, and general "world knowledge" acquired through pre-training on massive text corpora. While PLM-based recommenders show promise in settings where data is limited, they are hard to implement in practice due to their large size and computational cost. Additionally, fine-tuning PLMs to improve performance on collaborative signals may degrade the model's capacity for world knowledge and generalizability. We propose a recommender model that uses the architecture of large language models (LLMs) while reducing layer count and dimensions and replacing the text-based subword tokenization of a typical LLM with discrete tokens that uniquely represent individual content items. We find that this simplified approach substantially outperforms both traditional sequential recommender models and PLM-based recommender models at a tiny fraction of the size and computational complexity of PLM-based models. Our results suggest that the principal benefit of LLMs in recommender systems is their architecture, rather than the world knowledge acquired during extensive pre-training.
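
The core idea in the abstract — keep the Transformer architecture of an LLM but shrink it and replace subword text tokens with one discrete token per catalog item — can be sketched in a few dozen lines. The snippet below is an illustrative reconstruction, not the authors' implementation: the choice of PyTorch, the layer and dimension values, and the next-item cross-entropy objective are all assumptions made for the example.

# Minimal sketch (not the paper's code) of a small, LLM-style recommender
# whose vocabulary is the item catalog itself: each item ID is one token.
# Hyperparameters below are illustrative, not values reported in the paper.
import torch
import torch.nn as nn

class ItemTokenRecommender(nn.Module):
    def __init__(self, num_items, d_model=128, n_heads=4, n_layers=2, max_len=50):
        super().__init__()
        # One embedding per item ID (index 0 reserved for padding),
        # replacing the subword text embeddings of a typical LLM.
        self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Score every item in the catalog for next-item prediction.
        self.head = nn.Linear(d_model, num_items + 1)

    def forward(self, item_seq):
        # item_seq: (batch, seq_len) of item IDs, 0 = padding.
        seq_len = item_seq.size(1)
        pos = torch.arange(seq_len, device=item_seq.device)
        h = self.item_emb(item_seq) + self.pos_emb(pos)
        # Causal mask so each position attends only to earlier interactions.
        causal = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                       device=item_seq.device), diagonal=1)
        h = self.encoder(h, mask=causal, src_key_padding_mask=item_seq.eq(0))
        return self.head(h)  # (batch, seq_len, num_items + 1) logits

# Toy usage: predict the next item at every step of a user's history.
model = ItemTokenRecommender(num_items=10_000)
history = torch.randint(1, 10_001, (8, 20))           # 8 interaction sequences
logits = model(history)
loss = nn.CrossEntropyLoss(ignore_index=0)(
    logits[:, :-1].reshape(-1, logits.size(-1)),      # prediction at step t
    history[:, 1:].reshape(-1))                       # observed item at step t+1

The contrast with PLM-based recommenders is that the vocabulary here is the catalog rather than natural-language subwords, so the model carries no textual world knowledge and its cost scales with the catalog and sequence length instead of with a multi-billion-parameter text model.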

@article{foley2025_2506.15833,
  title={Architecture is All You Need: Improving LLM Recommenders by Dropping the Text},
  author={Kevin Foley and Shaghayegh Agah and Kavya Priyanka Kakinada},
  journal={arXiv preprint arXiv:2506.15833},
  year={2025}
}