A Unified Language Model for Large Scale Search, Recommendation, and Reasoning

Marco De Nadai
Edoardo D'Amico
Max Lefarov
Alexandre Tamborrino
Divita Vohra
Mark VanMiddlesworth
Shawn Lin
Jacqueline Wood
Jan Stypka
Eliza Klyce
Keshi Dai
Timothy Christopher Heath
Martin D. Gould
Yves Raimond
Sandeep Ghael
Tony Jebara
Andreas Damianou
Vladan Radosavljevic
Paul N. Bennett
Mounia Lalmas
Praveen Chandar
Main: 13 pages
10 figures
Bibliography: 1 page
9 tables
Appendix: 1 page
Abstract

LLMs are increasingly applied to recommendation, retrieval, and reasoning, yet deploying a single end-to-end model that can jointly support these behaviors over large, heterogeneous catalogs remains challenging. Such systems must generate unambiguous references to real items, handle multiple entity types, and operate under strict latency and reliability constraints, requirements that are difficult to satisfy with text-only generation. While tool-augmented recommender systems address parts of this problem, they introduce orchestration complexity and limit end-to-end optimization. We view this setting as an instance of a broader research problem: how to adapt LLMs to reason jointly over entities from multiple domains, users, and language in a fully self-contained manner. To this end, we introduce NEO, a framework that adapts a pre-trained decoder-only LLM into a tool-free, catalog-grounded generator. NEO represents items as semantic IDs (SIDs) and trains a single model to interleave natural language and typed item identifiers within a shared sequence. Text prompts control the task, the target entity type, and the output format (IDs, text, or mixed), while constrained decoding guarantees catalog-valid item generation without restricting free-form text. We refer to this instruction-conditioned controllability as language-steerability. We treat SIDs as a distinct modality and study design choices for integrating discrete entity representations into LLMs via staged alignment and instruction tuning. We evaluate NEO at scale on a real-world catalog of over 10M items spanning multiple media types and discovery tasks, including recommendation, search, and user understanding. In offline experiments, NEO consistently outperforms strong task-specific baselines and exhibits cross-task transfer, demonstrating a practical path toward consolidating large-scale discovery capabilities into a single language-steerable generative model.
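The abstract's claim that constrained decoding "guarantees catalog-valid item generation" can be illustrated with a common implementation pattern: a prefix tree (trie) over the SID token sequences of catalog items, used to mask the decoder's logits at each step. The sketch below is illustrative only, not the paper's implementation; the class name, token IDs, and toy catalog are all hypothetical.

```python
# Illustrative sketch (NOT the paper's code): trie-constrained decoding
# over semantic-ID (SID) token sequences. All token IDs are hypothetical.

from typing import Dict, List


class SIDTrie:
    """Prefix tree over the SID token sequences of catalog items."""

    def __init__(self, catalog: List[List[int]]):
        self.root: Dict[int, dict] = {}
        for sid_tokens in catalog:
            node = self.root
            for tok in sid_tokens:
                node = node.setdefault(tok, {})

    def allowed_next(self, prefix: List[int]) -> List[int]:
        """Tokens that keep the partial SID on a catalog-valid path."""
        node = self.root
        for tok in prefix:
            node = node.get(tok)
            if node is None:
                return []  # prefix matches no catalog item
        return sorted(node.keys())


# Toy catalog: each item is identified by a 3-token SID.
catalog = [[5, 2, 9], [5, 2, 7], [5, 3, 1], [8, 4, 6]]
trie = SIDTrie(catalog)

# While generating an item ID, the decoder would mask its logits so that
# only these continuations remain reachable:
print(trie.allowed_next([]))      # → [5, 8]
print(trie.allowed_next([5, 2]))  # → [7, 9]
print(trie.allowed_next([9]))     # → []  (invalid prefix, nothing allowed)
```

Because every decoding step is restricted to tokens with a catalog continuation, the model can only ever emit complete, valid item identifiers during ID spans, while free-form text spans are left unconstrained.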
