Recommender systems (RS) are increasingly vulnerable to shilling attacks, where adversaries inject fake user profiles to manipulate system outputs. Traditional attack strategies often rely on simplistic heuristics, require access to internal RS data, and overlook the manipulation potential of textual reviews. In this work, we introduce Agent4SR, a novel framework that leverages Large Language Model (LLM)-based agents to perform low-knowledge, high-impact shilling attacks through both rating and review generation. Agent4SR simulates realistic user behavior by orchestrating adversarial interactions: selecting items, assigning ratings, and crafting reviews, all while maintaining behavioral plausibility. Our design includes targeted profile construction, hybrid memory retrieval, and a review attack strategy that propagates target item features across unrelated reviews to amplify manipulation. Extensive experiments on multiple datasets and RS architectures demonstrate that Agent4SR outperforms existing low-knowledge baselines in both effectiveness and stealth. Our findings reveal a new class of emergent threats posed by LLM-driven agents, underscoring the urgent need for enhanced defenses in modern recommender systems.
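The abstract describes a per-profile attack loop: build a targeted profile, select items, assign ratings, retrieve memory context, and craft reviews that propagate target-item features even into unrelated reviews. The paper itself provides no code; the sketch below is a minimal, hypothetical illustration of that loop under our own assumptions. All names (AttackAgent, MemoryStore, llm_generate) are invented for illustration, and the LLM call is stubbed rather than wired to any real model API.

```python
import random
from dataclasses import dataclass, field

def llm_generate(prompt: str) -> str:
    """Stub for an LLM call; a real agent would query a hosted model here."""
    return f"[generated review for: {prompt[:40]}...]"

@dataclass
class MemoryStore:
    """Hybrid memory sketch: keyword-overlap retrieval with recency as tiebreak."""
    events: list = field(default_factory=list)

    def add(self, event: str) -> None:
        self.events.append(event)

    def retrieve(self, query: str, k: int = 3) -> list:
        # Score past events by keyword overlap; later (more recent) events
        # win ties because the index is part of the sort key.
        scored = [(sum(w in e for w in query.split()), i, e)
                  for i, e in enumerate(self.events)]
        scored.sort(reverse=True)
        return [e for _, _, e in scored[:k]]

@dataclass
class AttackAgent:
    target_item: str
    target_features: list
    memory: MemoryStore = field(default_factory=MemoryStore)

    def build_profile(self, catalog: list, n_filler: int = 5) -> list:
        """Targeted profile: the target item plus plausible filler items."""
        fillers = random.sample(
            [i for i in catalog if i != self.target_item], n_filler)
        return [self.target_item] + fillers

    def rate(self, item: str) -> int:
        # Push the target item; keep filler ratings varied so the
        # profile stays behaviorally plausible rather than uniform.
        return 5 if item == self.target_item else random.choice([3, 4, 4, 5])

    def review(self, item: str) -> str:
        context = self.memory.retrieve(item)
        # Feature propagation: mention target-item features even in
        # reviews of unrelated items, amplifying the manipulation signal.
        prompt = (f"Write a short review of {item} rated {self.rate(item)}. "
                  f"Naturally mention: {', '.join(self.target_features)}. "
                  f"Context: {context}")
        text = llm_generate(prompt)
        self.memory.add(f"reviewed {item}: {text}")
        return text

if __name__ == "__main__":
    catalog = [f"item_{i}" for i in range(20)]
    agent = AttackAgent("item_7", ["long battery life", "great value"])
    for item in agent.build_profile(catalog):
        print(item, agent.rate(item), agent.review(item))
```

This is a sketch, not the authors' implementation; the actual framework's prompting, memory design, and plausibility constraints are specified in the paper.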
@article{gu2025_2505.13528,
  title={LLM-Based User Simulation for Low-Knowledge Shilling Attacks on Recommender Systems},
  author={Shengkang Gu and Jiahao Liu and Dongsheng Li and Guangping Zhang and Mingzhe Han and Hansu Gu and Peng Zhang and Ning Gu and Li Shang and Tun Lu},
  journal={arXiv preprint arXiv:2505.13528},
  year={2025}
}