ResearchTrend.AI
CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation

30 October 2023
Yang Zhang
Fuli Feng
Jizhi Zhang
Keqin Bao
Qifan Wang
Xiangnan He
Main: 10 pages, 4 figures, 7 tables; bibliography: 2 pages
Abstract

Leveraging Large Language Models as Recommenders (LLMRec) has gained significant attention and introduced fresh perspectives in user preference modeling. Existing LLMRec approaches prioritize text semantics, usually neglecting the valuable collaborative information contained in user-item interactions. While these text-emphasizing approaches excel in cold-start scenarios, they may yield sub-optimal performance in warm-start situations. In pursuit of superior recommendations for both cold- and warm-start scenarios, we introduce CoLLM, an innovative LLMRec methodology that seamlessly incorporates collaborative information into LLMs for recommendation. CoLLM captures collaborative information with an external traditional model and maps it into the input token embedding space of the LLM, forming collaborative embeddings for the LLM's use. Through this external integration of collaborative information, CoLLM ensures effective modeling of collaborative information without modifying the LLM itself, providing the flexibility to employ various collaborative information modeling techniques. Extensive experiments validate that CoLLM adeptly integrates collaborative information into LLMs, resulting in enhanced recommendation performance. We release the code and data at this https URL.
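The core mechanism the abstract describes — projecting embeddings from an external collaborative model into the LLM's input token embedding space — can be sketched as follows. This is a minimal illustrative sketch, not the authors' released implementation: the dimensions, the one-layer linear mapping, and all variable names are assumptions, and the collaborative embeddings are random stand-ins for a pretrained model such as matrix factorization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: collaborative-embedding dim and LLM token-embedding dim.
COLLAB_DIM, LLM_DIM = 64, 4096

# Stand-in for a pretrained collaborative model (e.g. matrix factorization):
# it maps user/item IDs to COLLAB_DIM-dimensional vectors.
user_emb = rng.normal(size=(1000, COLLAB_DIM))   # 1000 users
item_emb = rng.normal(size=(5000, COLLAB_DIM))   # 5000 items

# Trainable mapping from the collaborative space into the LLM's input
# token-embedding space; a single linear layer for illustration.
W = rng.normal(scale=0.02, size=(COLLAB_DIM, LLM_DIM))
b = np.zeros(LLM_DIM)

def collaborative_token(vec):
    """Project a collaborative embedding to an LLM-input-sized vector."""
    return vec @ W + b

def build_inputs(prompt_token_embs, user_id, item_id):
    """Splice collaborative embeddings into the prompt's embedding
    sequence as extra 'soft tokens' appended after the text tokens."""
    u = collaborative_token(user_emb[user_id])
    i = collaborative_token(item_emb[item_id])
    return np.vstack([prompt_token_embs, u[None, :], i[None, :]])

# Toy prompt of 8 text tokens already embedded by the (frozen) LLM.
prompt = rng.normal(size=(8, LLM_DIM))
inputs = build_inputs(prompt, user_id=42, item_id=7)
print(inputs.shape)  # (10, 4096): 8 text tokens + 2 collaborative tokens
```

Because the LLM itself is untouched, any collaborative model that emits fixed-size user/item vectors could feed this mapping, which is the flexibility the abstract claims.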

@article{zhang2025_2310.19488,
  title={CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation},
  author={Yang Zhang and Fuli Feng and Jizhi Zhang and Keqin Bao and Qifan Wang and Xiangnan He},
  journal={arXiv preprint arXiv:2310.19488},
  year={2025}
}